1
Amini M, Salimi Y, Hajianfar G, Mainta I, Hervier E, Sanaat A, Rahmim A, Shiri I, Zaidi H. Fully Automated Region-Specific Human-Perceptive-Equivalent Image Quality Assessment: Application to 18F-FDG PET Scans. Clin Nucl Med 2024; 49:1079-1090. PMID: 39466652. DOI: 10.1097/rlu.0000000000005526.
Abstract
INTRODUCTION We propose a fully automated framework to conduct region-wise image quality assessment (IQA) on whole-body 18F-FDG PET scans. This framework (1) can be valuable in daily clinical image acquisition to instantly recognize low-quality scans for potential rescanning and/or image reconstruction, and (2) can make a significant impact on dataset collection for the development of artificial intelligence-driven 18F-FDG PET analysis models by rejecting low-quality images and those presenting with artifacts, toward building clean datasets. PATIENTS AND METHODS Two experienced nuclear medicine physicians separately evaluated the quality of 174 18F-FDG PET images from 87 patients, for each body region, on a 5-point Likert scale. The body regions included (1) the head and neck, including the brain; (2) the chest; (3) the chest-abdomen interval (diaphragmatic region); (4) the abdomen; and (5) the pelvis. Intrareader and interreader reproducibility of the quality scores were calculated using 39 randomly selected scans from the dataset. Using a binarized classification, images were dichotomized into low quality versus high quality for physician quality scores ≤3 versus >3, respectively. Taking the 18F-FDG PET/CT scans as input, our fully automated framework applies 2 deep learning (DL) models to the CT images to perform region identification and whole-body contour extraction (excluding the extremities), then classifies each PET region as low or high quality. For classification, 2 mainstream artificial intelligence-driven approaches were investigated: machine learning (ML) from radiomic features and DL. All models were trained and evaluated on the scores attributed by each physician, and the average of the scores was reported. DL and radiomics-ML models were evaluated on the same test dataset.
The performance evaluation was carried out on the same test dataset for the radiomics-ML and DL models using the area under the curve, accuracy, sensitivity, and specificity, and compared using the DeLong test with P values <0.05 regarded as statistically significant. RESULTS In the head and neck, chest, chest-abdomen interval, abdomen, and pelvis regions, the best models achieved area under the curve, accuracy, sensitivity, and specificity of [0.97, 0.95, 0.96, and 0.95], [0.85, 0.82, 0.87, and 0.76], [0.83, 0.76, 0.68, and 0.80], [0.73, 0.72, 0.64, and 0.77], and [0.72, 0.68, 0.70, and 0.67], respectively. In all regions, models achieved the highest performance when developed on the quality scores with higher intrareader reproducibility. Comparison of the DL and radiomics-ML models did not show any statistically significant differences, though the DL models showed overall better trends. CONCLUSIONS We developed a fully automated, human-perceptive-equivalent model to conduct region-wise IQA on 18F-FDG PET images. Our analysis emphasizes the necessity of developing separate models for different body regions and of performing data annotation based on multiple experts' consensus in IQA studies.
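As an illustration of the binarized evaluation described in the abstract above, the dichotomization of 5-point Likert scores and the reported metrics can be sketched in plain Python. This is a minimal sketch, not the authors' implementation; the rank-based AUC is the standard Mann-Whitney estimate, and all function names are illustrative:

```python
def dichotomize(scores, threshold=3):
    """Map 5-point Likert quality scores to binary labels: 1 = high quality (>3), 0 = low (<=3)."""
    return [1 if s > threshold else 0 for s in scores]

def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

def auc(y_true, y_score):
    """ROC AUC via the Mann-Whitney statistic: probability that a random positive
    receives a higher classifier score than a random negative (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

With the paper's cutoff, scores of 4 or 5 map to high quality; the AUC is computed from continuous classifier scores rather than from the thresholded predictions.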
Affiliation(s)
- Mehdi Amini
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ghasem Hajianfar
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ismini Mainta
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elsa Hervier
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Isaac Shiri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
2
Salimi Y, Mansouri Z, Amini M, Mainta I, Zaidi H. Explainable AI for automated respiratory misalignment detection in PET/CT imaging. Phys Med Biol 2024; 69:215036. PMID: 39419113. DOI: 10.1088/1361-6560/ad8857.
Abstract
Purpose. Positron emission tomography (PET) image quality can be affected by artifacts emanating from PET, from computed tomography (CT), or from misalignment between the PET and CT images. Automated detection of misalignment artifacts can be helpful both in data curation and in facilitating the clinical workflow. This study aimed to develop an explainable machine learning approach to detect misalignment artifacts in PET/CT imaging. Approach. This study included 1216 PET/CT images. All images were visually inspected, and those with respiratory misalignment artifact (RMA) were identified. Using previously trained models, four organs (the lungs, liver, spleen, and heart) were delineated on PET and CT images separately. Data were randomly split into cross-validation (80%) and test (20%) sets; the two segmentations performed on PET and CT images were then compared, and the comparison metrics were used as predictors for a random forest framework in a 10-fold scheme on the cross-validation data. The trained models were tested on the 20% test set. Model performance was calculated in terms of specificity, sensitivity, F1-score, and area under the curve (AUC). Main results. Sensitivity, specificity, and AUC of 0.82, 0.85, and 0.91 were achieved in the 10-fold split. F1-score, sensitivity, specificity, and AUC of 84.5 vs 82.3, 83.9 vs 83.8, 87.7 vs 83.5, and 93.2 vs 90.1 were achieved for cross-validation vs the test set, respectively. The liver and lung were the most important organs retained after feature selection. Significance. We developed an automated pipeline that segments four organs from PET and CT images separately and uses the agreement between these segmentations to decide whether a misalignment artifact is present. This methodology may follow the same logic as a reader detecting misalignment by comparing the contours of organs on PET and CT images. The proposed method can be used to clean large datasets or be integrated into a clinical scanner to flag artifactual cases.
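The core idea, comparing PET-derived and CT-derived organ masks and feeding the agreement metrics to a classifier, can be sketched as follows. This is a hedged, minimal illustration (pure Python over flattened binary masks), not the authors' pipeline; in practice the masks come from 3D segmentation models and the per-organ features feed a random forest:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flattened 0/1 sequences."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

def misalignment_features(pet_masks, ct_masks):
    """Per-organ Dice between PET- and CT-derived segmentations of the same organ.
    Low agreement (small Dice) for organs near the diaphragm suggests respiratory
    misalignment; these values would serve as classifier inputs."""
    return {organ: dice(pet_masks[organ], ct_masks[organ]) for organ in pet_masks}
```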
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mehdi Amini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
3
Sanaat A, Hu Y, Boccalini C, Salimi Y, Mansouri Z, Teixeira EPA, Mathoux G, Garibotto V, Zaidi H. Tracer-Separator: A Deep Learning Model for Brain PET Dual-Tracer (18F-FDG and Amyloid) Separation. Clin Nucl Med 2024 (online ahead of print). PMID: 39468375. DOI: 10.1097/rlu.0000000000005511.
Abstract
INTRODUCTION Multiplexed PET imaging has the potential to revolutionize clinical decision-making by simultaneously capturing data from several radiotracers in a single scan, enhancing diagnostic accuracy and patient comfort. Using a transformer-based deep learning model, this study underscores the potential of advanced imaging techniques to streamline diagnosis and improve patient outcomes. PATIENTS AND METHODS The research cohort consisted of 120 patients spanning from cognitively unimpaired individuals to those with mild cognitive impairment, dementia, and other mental disorders. Patients underwent various imaging assessments, including 3D T1-weighted MRI, amyloid PET scans using either 18F-florbetapir (FBP) or 18F-flutemetamol (FMM), and 18F-FDG PET. Summed images of FMM/FBP and FDG were used as a proxy for simultaneous scanning of 2 different tracers. A SwinUNETR model, a convolution-free transformer architecture, was trained for image translation using a mean squared error loss function and 5-fold cross-validation. Visual evaluation involved assessing image similarity and amyloid status, comparing synthesized images with actual ones. Statistical analysis was conducted to determine the significance of differences. RESULTS Visual inspection of the synthesized images revealed remarkable similarity to the reference images across clinical statuses. The mean centiloid bias for dementia, mild cognitive impairment, and healthy control subjects is 15.70 ± 29.78, 0.35 ± 33.68, and 6.52 ± 25.19, respectively, for the FBP tracer, whereas for FMM it is -6.85 ± 25.02, 4.23 ± 23.78, and 5.71 ± 21.72, respectively. Clinical evaluation by 2 readers further confirmed the model's efficiency, with 97 FBP/FMM and 63 FDG synthesized images (from 120 subjects) found similar to the ground truth (rank 3), whereas 3 FBP/FMM and 15 FDG synthesized images were considered nonsimilar (rank 1).
Promising sensitivity, specificity, and accuracy were achieved in amyloid status assessment based on the synthesized images, with an average sensitivity of 95 ± 2.5, specificity of 72.5 ± 12.5, and accuracy of 87.5 ± 2.5. Error distribution analyses provided valuable insights into error levels across brain regions, with most errors falling between -0.1 and +0.2 SUV ratio. Correlation analyses demonstrated strong associations between actual and synthesized images, particularly for FMM images (FBP: Y = 0.72X + 20.95, R2 = 0.54; FMM: Y = 0.65X + 22.77, R2 = 0.77). CONCLUSIONS This study demonstrated the potential of a novel convolution-free transformer architecture, SwinUNETR, for synthesizing realistic FDG and FBP/FMM images from summation scans mimicking simultaneous dual-tracer imaging.
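The reported regression lines (e.g., Y = 0.72X + 20.95, R2 = 0.54) are ordinary least-squares fits between actual and synthesized values. Such a fit can be computed directly; this is a minimal sketch with illustrative names, assuming paired region-wise measurements:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = slope*x + intercept, with coefficient of
    determination R^2 (1 - residual sum of squares / total sum of squares)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2
```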
Affiliation(s)
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yiyi Hu
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Gregory Mathoux
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
4
Salimi Y, Hajianfar G, Mansouri Z, Sanaat A, Amini M, Shiri I, Zaidi H. Organomics: A Concept Reflecting the Importance of PET/CT Healthy Organ Radiomics in Non-Small Cell Lung Cancer Prognosis Prediction Using Machine Learning. Clin Nucl Med 2024; 49:899-908. PMID: 39192505. DOI: 10.1097/rlu.0000000000005400.
Abstract
PURPOSE Non-small cell lung cancer is the most common subtype of lung cancer. Patient survival prediction using machine learning (ML) and radiomics analysis has shown promising results. However, most studies reported in the literature focus on information extracted from malignant lesions. This study explores the relevance and added value of information extracted from healthy organs, in addition to tumoral tissue, using ML algorithms. PATIENTS AND METHODS This study included PET/CT images of 154 patients collected from available online databases. The gross tumor volume (GTV) and 33 volumes of interest defined on healthy organs were segmented using nnU-Net, a deep learning-based segmentation framework. Subsequently, 107 radiomic features were extracted from the PET and CT images ("Organomics"). Clinical information was combined with PET and CT radiomics from organs and GTVs, considering 19 different input combinations. Finally, 5 feature selection (FS) methods and 6 ML algorithms were tested in a 3-fold cross-validation scheme. Model performance was quantified in terms of the concordance index (C-index). RESULTS For the input combination including all radiomics information, most of the selected features belonged to PET Organomics and CT Organomics. The highest C-index (0.68) was achieved using the univariate C-index FS method with a random survival forest ML model on CT Organomics + PET Organomics, as well as the minimum-depth FS method with a CoxPH ML model on PET Organomics. Of the 17 combinations with a C-index higher than 0.65, 16 used Organomics from PET or CT images as input. CONCLUSIONS The selected features and C-indices demonstrated that the additional information extracted from healthy organs on both PET and CT imaging modalities improved the ML performance.
Organomics could be a step toward exploiting the whole information available from multimodality medical images, contributing to the emerging field of digital twins in health care.
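The concordance index used to score these survival models measures how often the model ranks patient risk consistently with observed survival. A minimal pure-Python sketch of Harrell's C-index follows (illustrative names, not the study's implementation):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs, the fraction where the
    higher-risk subject has the shorter survival time (risk ties count 0.5).
    A pair (i, j) is comparable if subject i had the event and died before j."""
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                den += 1
                if risk_scores[i] > risk_scores[j]:
                    num += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    num += 0.5
    return num / den
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect concordance, so the reported 0.68 indicates a modest but real prognostic signal.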
Affiliation(s)
- Yazdan Salimi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ghasem Hajianfar
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhosein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mehdi Amini
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
5
Sanaat A, Boccalini C, Mathoux G, Perani D, Frisoni GB, Haller S, Montandon ML, Rodriguez C, Giannakopoulos P, Garibotto V, Zaidi H. A deep learning model for generating [18F]FDG PET Images from early-phase [18F]Florbetapir and [18F]Flutemetamol PET images. Eur J Nucl Med Mol Imaging 2024; 51:3518-3531. PMID: 38861183. PMCID: PMC11445334. DOI: 10.1007/s00259-024-06755-1.
Abstract
INTRODUCTION Amyloid-β (Aβ) plaques are a significant hallmark of Alzheimer's disease (AD), detectable via amyloid PET imaging. Fluorine-18 fluorodeoxyglucose ([18F]FDG) PET tracks cerebral glucose metabolism, which correlates with synaptic dysfunction and disease progression, and is complementary for AD diagnosis. Dual-phase amyloid PET acquisitions allow early-phase amyloid PET to be used as a biomarker of neurodegeneration, as it has been shown to correlate well with [18F]FDG PET. The aim of this study was to evaluate the added value of synthesizing the latter from the former through deep learning (DL), with the goal of reducing the number of PET scans, the radiation dose, and patient discomfort. METHODS A total of 166 subjects, including cognitively unimpaired individuals (N = 72) and subjects with mild cognitive impairment (N = 73) or dementia (N = 21), were included in this study. All underwent T1-weighted MRI, dual-phase amyloid PET scans using either Fluorine-18 florbetapir ([18F]FBP) or Fluorine-18 flutemetamol ([18F]FMM), and an [18F]FDG PET scan. Two transformer-based DL models (SwinUNETR) were trained separately to synthesize [18F]FDG from early-phase [18F]FBP and [18F]FMM images (eFBP/eFMM). A clinical similarity score (1: no similarity, to 3: similar) was used to compare the imaging information obtained from synthesized [18F]FDG, as well as from eFBP/eFMM, to actual [18F]FDG. Quantitative evaluations included region-wise correlation and single-subject voxel-wise analyses against a reference [18F]FDG PET healthy-control database. Dice coefficients were calculated to quantify the whole-brain spatial overlap between hypometabolic ([18F]FDG PET) and hypoperfused (eFBP/eFMM) binary maps at the single-subject level, as well as between the [18F]FDG PET and synthetic [18F]FDG PET hypometabolic binary maps.
RESULTS The clinical evaluation showed that, compared with eFBP/eFMM (mean clinical similarity score (CSS) = 1.53), the synthetic [18F]FDG images are quite similar to the actual [18F]FDG images (mean CSS = 2.7) in terms of preserving clinically relevant uptake patterns. The single-subject voxel-wise analyses showed that, at the group level, the Dice scores improved by around 13% and 5% when using the DL approach for eFBP and eFMM, respectively. The correlation analysis indicated a relatively strong correlation between eFBP/eFMM and [18F]FDG (eFBP: slope = 0.77, R2 = 0.61, P < 0.0001; eFMM: slope = 0.77, R2 = 0.61, P < 0.0001). This correlation improved for synthetic [18F]FDG generated from eFBP (slope = 1.00, R2 = 0.68, P < 0.0001) and from eFMM (slope = 0.93, R2 = 0.72, P < 0.0001). CONCLUSION We proposed a DL model for generating [18F]FDG images from eFBP/eFMM PET images. This method may serve as an alternative to multi-radiotracer scanning in research and clinical settings, allowing adoption of the currently validated [18F]FDG PET normal reference databases for data analysis.
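The single-subject voxel-wise analysis against a healthy-control database amounts to computing voxel z-scores and thresholding them into a binary hypometabolic map, which is then compared via Dice. A hedged sketch under that assumption (illustrative names; the threshold value is arbitrary here, and 1D lists stand in for 3D volumes):

```python
def hypometabolism_map(subject, control_mean, control_std, z_threshold=-2.0):
    """Voxel-wise z-scores of a subject scan against a healthy-control database,
    thresholded into a binary hypometabolic map (1 = abnormally low uptake)."""
    z = [(s - m) / sd for s, m, sd in zip(subject, control_mean, control_std)]
    return [1 if zi < z_threshold else 0 for zi in z]
```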
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Cecilia Boccalini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Laboratory of Neuroimaging and Innovative Molecular Tracers (NIMTlab), Geneva University Neurocenter and Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Gregory Mathoux
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Daniela Perani
- Vita-Salute San Raffaele University, Nuclear Medicine Unit, San Raffaele Hospital, Milan, Italy
- Sven Haller
- CIMC - Centre d'Imagerie Médicale de Cornavin, Geneva, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Marie-Louise Montandon
- Department of Rehabilitation and Geriatrics, Geneva University Hospitals and University of Geneva, Geneva, Switzerland
- Cristelle Rodriguez
- Division of Institutional Measures, Medical Direction, Geneva University Hospitals, Geneva, Switzerland
- Panteleimon Giannakopoulos
- Division of Institutional Measures, Medical Direction, Geneva University Hospitals, Geneva, Switzerland
- Department of Psychiatry, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Valentina Garibotto
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Laboratory of Neuroimaging and Innovative Molecular Tracers (NIMTlab), Geneva University Neurocenter and Faculty of Medicine, University of Geneva, Geneva, Switzerland
- CIBM Center for Biomedical Imaging, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, The Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
6
Shi X, Li B, Wang W, Qin Y, Wang H, Wang X. EEG-VTTCNet: A loss joint training model based on the vision transformer and the temporal convolution network for EEG-based motor imagery classification. Neuroscience 2024; 556:42-51. PMID: 39103043. DOI: 10.1016/j.neuroscience.2024.07.051.
Abstract
Brain-computer interface (BCI) is a technology that directly connects signals between the human brain and a computer or other external device. Motor imagery electroencephalographic (MI-EEG) signals are considered a promising paradigm for BCI systems, with a wide range of potential applications in medical rehabilitation, human-computer interaction, and virtual reality. Accurate decoding of MI-EEG signals poses a significant challenge due to the quality of the collected EEG data and to subject variability, so developing an efficient MI-EEG decoding network is crucial. This paper proposes a joint-loss training model based on the vision transformer (ViT) and the temporal convolutional network (TCN), termed EEG-VTTCNet, to classify MI-EEG signals. To take advantage of the two branches together, EEG-VTTCNet adopts a shared-convolution strategy and a dual-branch strategy: the dual branches perform complementary learning and jointly train the shared convolutional module for better performance. We conducted experiments on the BCI Competition IV-2a and IV-2b datasets, and the proposed network outperformed the current state-of-the-art techniques with accuracies of 84.58% and 90.94%, respectively, in the subject-dependent mode. In addition, we used t-SNE to visualize the features extracted by the proposed network, further demonstrating the effectiveness of the feature-extraction framework. We also conducted extensive ablation and hyperparameter-tuning experiments to construct a robust network architecture that generalizes well.
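The joint-loss idea, where each branch contributes a classification loss and their weighted sum trains the shared module, can be sketched numerically. This is a minimal illustration with a hypothetical weighting parameter alpha, not the paper's actual loss definition:

```python
import math

def cross_entropy(probs, label):
    """Cross-entropy for one sample: negative log-probability of the true class."""
    return -math.log(probs[label])

def joint_loss(vit_probs, tcn_probs, label, alpha=0.5):
    """Weighted sum of the two branch losses; gradients from both branches would
    flow into the shared convolutional front end (alpha balances ViT vs TCN)."""
    return alpha * cross_entropy(vit_probs, label) + (1 - alpha) * cross_entropy(tcn_probs, label)
```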
Affiliation(s)
- Xingbin Shi
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China
- Baojiang Li
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China
- Wenlong Wang
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China
- Yuxin Qin
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China
- Haiyan Wang
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China
- Xichao Wang
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China
7
Rezaei B, Tay ZW, Mostufa S, Manzari ON, Azizi E, Ciannella S, Moni HEJ, Li C, Zeng M, Gómez-Pastora J, Wu K. Magnetic nanoparticles for magnetic particle imaging (MPI): design and applications. Nanoscale 2024; 16:11802-11824. PMID: 38809214. DOI: 10.1039/d4nr01195c.
Abstract
Recent advancements in medical imaging have brought forth various techniques, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound, each contributing to improved diagnostic capabilities. Most recently, magnetic particle imaging (MPI) has become a rapidly advancing imaging modality with profound implications for medical diagnostics and therapeutics. By directly detecting the magnetization response of magnetic tracers, MPI surpasses conventional imaging modalities in sensitivity and quantifiability, particularly in stem cell tracking applications. This comprehensive review explores the fundamental principles, instrumentation, magnetic nanoparticle tracer design, and applications of MPI, offering insights into recent advancements and future directions. Novel tracer designs, such as zinc-doped iron oxide nanoparticles (Zn-IONPs), exhibit enhanced performance, broadening MPI's utility. Spatial encoding strategies, scanning trajectories, and instrumentation innovations are elucidated, illuminating the technical underpinnings of MPI's evolution. Moreover, integrating machine learning and deep learning methods enhances MPI's image processing capabilities, paving the way for more efficient segmentation, quantification, and reconstruction. The potential of superferromagnetic iron oxide nanoparticle chains (SFMIOs) as new MPI tracers further advances imaging quality and expands clinical applications, underscoring the promising future of this emerging imaging modality.
Affiliation(s)
- Bahareh Rezaei
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Zhi Wei Tay
- National Institute of Advanced Industrial Science and Technology (AIST), Health and Medical Research Institute, Tsukuba, Ibaraki 305-8564, Japan
- Shahriar Mostufa
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Omid Nejati Manzari
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Ebrahim Azizi
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Stefano Ciannella
- Department of Chemical Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Hur-E-Jannat Moni
- Department of Chemical Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Changzhi Li
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Minxiang Zeng
- Department of Chemical Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Kai Wu
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX 79409, USA
8
Mansouri Z, Salimi Y, Akhavanallaf A, Shiri I, Teixeira EPA, Hou X, Beauregard JM, Rahmim A, Zaidi H. Deep transformer-based personalized dosimetry from SPECT/CT images: a hybrid approach for [177Lu]Lu-DOTATATE radiopharmaceutical therapy. Eur J Nucl Med Mol Imaging 2024; 51:1516-1529. PMID: 38267686. PMCID: PMC11043201. DOI: 10.1007/s00259-024-06618-9.
Abstract
PURPOSE Accurate dosimetry is critical for ensuring the safety and efficacy of radiopharmaceutical therapies. In current clinical dosimetry practice, MIRD formalisms are widely employed. However, with the rapid advancement of deep learning (DL) algorithms, there has been increasing interest in leveraging their calculation speed and automation capabilities. We aimed to develop a hybrid transformer-based DL model that incorporates a multiple voxel S-value (MSV) approach for voxel-level dosimetry in [177Lu]Lu-DOTATATE therapy, with the goal of achieving accuracy closely aligned with Monte Carlo (MC) simulations, considered the reference standard. We extended our analysis to include the MIRD formalisms (SSV and MSV), thereby conducting a comprehensive dosimetry study. METHODS We used a dataset consisting of 22 patients undergoing up to 4 cycles of [177Lu]Lu-DOTATATE therapy. MC simulations were used to generate reference absorbed dose maps. In addition, the MIRD formalism approaches, namely the single S-value (SSV) and MSV techniques, were performed. A UNEt TRansformer (UNETR) DL architecture was trained using 5-fold cross-validation to generate MC-based dose maps. Co-registered CT images were fed into the network as input, and the difference between MC and MSV (MC-MSV) was set as the output. The DL output is then added back to the MSV maps to recover the MC dose maps. Finally, the dose maps generated by MSV, SSV, and DL were quantitatively compared to the MC reference at both the voxel level and the organ level (organs at risk and lesions). RESULTS The DL approach showed slightly better performance (voxel relative absolute error (RAE) = 5.28 ± 1.32) compared to MSV (voxel RAE = 5.54 ± 1.4) and outperformed SSV (voxel RAE = 7.8 ± 3.02). Gamma analysis pass rates were 99.0 ± 1.2%, 98.8 ± 1.3%, and 98.7 ± 1.52% for the DL, MSV, and SSV approaches, respectively.
The computational time for MC was the highest (~2 days for a single-bed SPECT study) compared to MSV, SSV, and DL, whereas the DL-based approach was the most time-efficient (3 s for a single-bed SPECT study). Organ-wise analysis showed absolute percent errors of 1.44 ± 3.05%, 1.18 ± 2.65%, and 1.15 ± 2.5% for the SSV, MSV, and DL approaches, respectively, in lesion-absorbed doses. CONCLUSION A hybrid transformer-based deep learning model was developed for fast and accurate dose map generation, outperforming the MIRD approaches, specifically in heterogeneous regions. The model achieved accuracy close to the MC reference standard and has potential for clinical implementation on large-scale datasets.
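The hybrid scheme trains the network on the MC-MSV residual, so the final dose map is the MSV estimate plus the predicted residual, evaluated by voxel-wise relative absolute error. A minimal sketch with illustrative names (1D lists standing in for 3D dose maps):

```python
def reconstruct_dose(msv_map, predicted_residual):
    """Hybrid dose map: the fast MSV estimate plus the DL-predicted (MC - MSV)
    residual, recovering an MC-like dose map without running the MC simulation."""
    return [m + r for m, r in zip(msv_map, predicted_residual)]

def relative_absolute_error(estimate, reference):
    """Mean voxel-wise relative absolute error (%) against the MC reference,
    ignoring zero-dose reference voxels."""
    errs = [abs(e - r) / r for e, r in zip(estimate, reference) if r > 0]
    return 100.0 * sum(errs) / len(errs)
```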
Collapse
Affiliation(s)
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Eliluane Pirazzo Andrade Teixeira
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Xinchi Hou
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Jean-Mathieu Beauregard
- Cancer Research Centre and Department of Radiology and Nuclear Medicine, Université Laval, Quebec City, QC, Canada
- Arman Rahmim
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Department of Nuclear Medicine, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
9
Ma K, Chen KZ, Qiao SL. Advances of Layered Double Hydroxide-Based Materials for Tumor Imaging and Therapy. CHEM REC 2024; 24:e202400010. [PMID: 38501833 DOI: 10.1002/tcr.202400010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2024] [Revised: 02/22/2024] [Indexed: 03/20/2024]
Abstract
Layered double hydroxides (LDHs) are a class of functional anionic clays that typically consist of orthorhombic arrays of metal hydroxides with anions sandwiched between the layers. Owing to their unique properties, including high chemical stability, good biocompatibility, controlled drug loading, and enhanced drug bioavailability, LDHs have many potential applications in the medical field, especially in bioimaging and tumor therapy. This paper reviews the research progress of LDHs and their nanocomposites in the field of tumor imaging and therapy. First, the structure and advantages of LDHs are discussed. Then, several commonly used methods for the preparation of LDHs are presented, including co-precipitation, hydrothermal, and ion-exchange methods. Subsequently, recent advances in LDHs and their nanocomposites for cancer imaging and therapy are highlighted. Finally, based on current research, we summarize the prospects and challenges of LDHs and their nanocomposites for cancer diagnosis and therapy.
Affiliation(s)
- Ke Ma
- Lab of Functional and Biomedical Nanomaterials, College of Materials Science and Engineering, Qingdao University of Science and Technology (QUST), Qingdao, 266042, P. R. China
- Ke-Zheng Chen
- Lab of Functional and Biomedical Nanomaterials, College of Materials Science and Engineering, Qingdao University of Science and Technology (QUST), Qingdao, 266042, P. R. China
- Sheng-Lin Qiao
- Lab of Functional and Biomedical Nanomaterials, College of Materials Science and Engineering, Qingdao University of Science and Technology (QUST), Qingdao, 266042, P. R. China
10
Yang X, Chin BB, Silosky M, Wehrend J, Litwiller DV, Ghosh D, Xing F. Learning Without Real Data Annotations to Detect Hepatic Lesions in PET Images. IEEE Trans Biomed Eng 2024; 71:679-688. [PMID: 37708016 DOI: 10.1109/tbme.2023.3315268] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/16/2023]
Abstract
OBJECTIVE Deep neural networks have recently been applied to lesion identification in fluorodeoxyglucose (FDG) positron emission tomography (PET) images, but they typically rely on a large amount of well-annotated data for model training. This is extremely difficult to achieve for neuroendocrine tumors (NETs), because of the low incidence of NETs and the expense of lesion annotation in PET images. The objective of this study is to design a novel, adaptable deep learning method, which uses no real lesion annotations but instead low-cost, list-mode-simulated data, for hepatic lesion detection in real-world clinical NET PET images. METHODS We first propose a region-guided generative adversarial network (RG-GAN) for lesion-preserved image-to-image translation. Then, we design a specific data augmentation module for our list-mode-simulated data and incorporate this module into the RG-GAN to improve model training. Finally, we combine the RG-GAN, the data augmentation module, and a lesion detection neural network into a unified framework for joint-task learning to adaptively identify lesions in real-world PET data. RESULTS The proposed method outperforms recent state-of-the-art lesion detection methods in real clinical 68Ga-DOTATATE PET images and produces very competitive performance with the target model that is trained with real lesion annotations. CONCLUSION With RG-GAN modeling and specific data augmentation, we can obtain good lesion detection performance without using any real data annotations. SIGNIFICANCE This study introduces an adaptable deep learning method for hepatic lesion identification in NETs, which can significantly reduce human effort for data annotation and improve model generalizability for lesion detection with PET imaging.
11
Izadi S, Shiri I, F Uribe C, Geramifar P, Zaidi H, Rahmim A, Hamarneh G. Enhanced direct joint attenuation and scatter correction of whole-body PET images via context-aware deep networks. Z Med Phys 2024:S0939-3889(24)00002-3. [PMID: 38302292 DOI: 10.1016/j.zemedi.2024.01.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 12/24/2023] [Accepted: 01/10/2024] [Indexed: 02/03/2024]
Abstract
In positron emission tomography (PET), attenuation and scatter corrections are necessary steps toward accurate quantitative reconstruction of the radiopharmaceutical distribution. Inspired by recent advances in deep learning, many algorithms based on convolutional neural networks have been proposed for automatic attenuation and scatter correction, enabling applications to CT-less or MR-less PET scanners and improving performance in the presence of CT-related artifacts. A known characteristic of PET imaging is varying tracer uptake across patients and/or anatomical regions. However, existing deep learning-based algorithms utilize a fixed model across different subjects and/or anatomical regions during inference, which could result in spurious outputs. In this work, we present a novel deep learning-based framework for the direct reconstruction of attenuation- and scatter-corrected PET from non-attenuation-corrected images, in the absence of structural information at inference. To deal with inter-subject and intra-subject uptake variations in PET imaging, we propose a novel model that performs subject- and region-specific filtering by modulating the convolution kernels in accordance with the contextual coherency within the neighboring slices. This way, the context-aware convolution can guide the composition of intermediate features in favor of regressing input-conditioned and/or region-specific tracer uptakes. We also utilized a large cohort of 910 whole-body studies for training and evaluation purposes, which is more than one order of magnitude larger than in previous works. In our experimental studies, qualitative assessments showed that our proposed CT-free method is capable of producing corrected PET images that accurately resemble ground-truth images corrected with the aid of CT scans.
For quantitative assessment, we evaluated our proposed method on 112 held-out subjects and achieved an absolute relative error of 14.30 ± 3.88% and a relative error of −2.11% ± 2.73% across the whole body.
Affiliation(s)
- Saeed Izadi
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Carlos F Uribe
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Parham Geramifar
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada
12
Shiri I, Amini M, Yousefirizi F, Vafaei Sadr A, Hajianfar G, Salimi Y, Mansouri Z, Jenabi E, Maghsudi M, Mainta I, Becker M, Rahmim A, Zaidi H. Information fusion for fully automated segmentation of head and neck tumors from PET and CT images. Med Phys 2024; 51:319-333. [PMID: 37475591 DOI: 10.1002/mp.16615] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 05/16/2023] [Accepted: 06/19/2023] [Indexed: 07/22/2023] Open
Abstract
BACKGROUND PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms exploiting the available multi-modal information are still lacking. PURPOSE Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions. METHODS The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline, with three different input-, layer-, and decision-level information fusions. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge different segmentation outputs (from PET and from image-level and network-level fusions), that is, output-level information fusion (voting-based fusion). Different networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% in each center), was used for final result reporting. Different standard segmentation metrics and conventional PET metrics, such as SUV, were calculated. RESULTS In single modalities, PET had a reasonable performance with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably and reached a Dice score of only 0.38 ± 0.22.
Conventional fusion algorithms obtained Dice scores in the range of 0.76-0.81, with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian. CONCLUSION PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. In addition, both conventional image-level and DL fusions achieve competitive results. Meanwhile, output-level voting-based fusion using majority voting of several algorithms results in statistically significant improvements in the segmentation of HNC.
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mehdi Amini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Alireza Vafaei Sadr
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, USA
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Minerva Becker
- Service of Radiology, Geneva University Hospital, Geneva, Switzerland
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Department of Radiology and Physics, University of British Columbia, Vancouver, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
13
Shiri I, Salimi Y, Maghsudi M, Jenabi E, Harsini S, Razeghi B, Mostafaei S, Hajianfar G, Sanaat A, Jafari E, Samimi R, Khateri M, Sheikhzadeh P, Geramifar P, Dadgar H, Bitrafan Rajabi A, Assadi M, Bénard F, Vafaei Sadr A, Voloshynovskiy S, Mainta I, Uribe C, Rahmim A, Zaidi H. Differential privacy preserved federated transfer learning for multi-institutional 68Ga-PET image artefact detection and disentanglement. Eur J Nucl Med Mol Imaging 2023; 51:40-53. [PMID: 37682303 PMCID: PMC10684636 DOI: 10.1007/s00259-023-06418-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 08/24/2023] [Indexed: 09/09/2023]
Abstract
PURPOSE Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients, and financial costs. Mismatch and halo artefacts occur frequently in whole-body PET/CT imaging with gallium-68 (68Ga)-labelled compounds. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues while building centre-specific models that detect and correct artefacts present in PET images. METHODS Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 3 countries, including 8 different centres, were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images under centre-based (CeBa), centralized (CeZe), and the proposed differential privacy FTL frameworks. Quantitative analysis was performed on the remaining 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence, and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe, and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (95% CI: 0.38 to 0.47), 0.32 ± 0.23 (95% CI: 0.27 to 0.37), and 0.28 ± 0.15 (95% CI: 0.25 to 0.31), respectively.
Statistical analysis using the Wilcoxon test revealed significant differences among the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) in the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts, compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen, and pelvic regions in 68Ga-PET imaging. CONCLUSION The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated in the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Department of Cardiology, Inselspital, University of Bern, Bern, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Sara Harsini
- BC Cancer Research Institute, Vancouver, BC, Canada
- Behrooz Razeghi
- Department of Computer Science, University of Geneva, Geneva, Switzerland
- Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Rezvan Samimi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Maziar Khateri
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Peyman Sheikhzadeh
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Parham Geramifar
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habibollah Dadgar
- Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Ahmad Bitrafan Rajabi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Echocardiography Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- François Bénard
- BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Alireza Vafaei Sadr
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA, 17033, USA
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Carlos Uribe
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Arman Rahmim
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Geneva University Neuro Center, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
14
Shiri I, Salimi Y, Hervier E, Pezzoni A, Sanaat A, Mostafaei S, Rahmim A, Mainta I, Zaidi H. Artificial Intelligence-Driven Single-Shot PET Image Artifact Detection and Disentanglement: Toward Routine Clinical Image Quality Assurance. Clin Nucl Med 2023; 48:1035-1046. [PMID: 37883015 PMCID: PMC10662584 DOI: 10.1097/rlu.0000000000004912] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 08/22/2023] [Indexed: 10/27/2023]
Abstract
PURPOSE Medical imaging artifacts compromise image quality and quantitative analysis and might confound interpretation and misguide clinical decision-making. The present work envisions and demonstrates a new paradigm, the PET image Quality Assurance NETwork (PET-QA-NET), in which various image artifacts are detected and disentangled from images without prior knowledge of a standard of reference or ground truth, for routine PET image quality assurance. METHODS The network was trained and evaluated using training/validation/testing data sets consisting of 669/100/100 artifact-free oncological 18F-FDG PET/CT images and subsequently fine-tuned and evaluated on 384 scans (20% for fine-tuning) from 8 different PET centers. The developed DL model was quantitatively assessed using various image quality metrics calculated for 22 volumes of interest defined on each scan. In addition, 200 additional 18F-FDG PET/CT scans (this time with artifacts), generated using both CT-based attenuation and scatter correction (routine PET) and PET-QA-NET, were blindly evaluated by 2 nuclear medicine physicians for the presence of artifacts, diagnostic confidence, image quality, and the number of lesions detected in different body regions. RESULTS Across the volumes of interest of 100 patients, SUV MAE values of 0.13 ± 0.04, 0.24 ± 0.1, and 0.21 ± 0.06 were reached for SUVmean, SUVmax, and SUVpeak, respectively (no statistically significant difference). Qualitative assessment showed a general trend of improved image quality and diagnostic confidence and reduced image artifacts for PET-QA-NET compared with routine CT-based attenuation and scatter correction. CONCLUSION We developed a highly effective and reliable quality assurance tool that can be embedded routinely to detect and correct for 18F-FDG PET image artifacts in the clinical setting, with notably improved PET image quality and quantitative capabilities.
Affiliation(s)
- Isaac Shiri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Yazdan Salimi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Elsa Hervier
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Agathe Pezzoni
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society
- Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Ismini Mainta
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Habib Zaidi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Geneva University Neuro Center, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
15
Arabi H, Zaidi H. Recent Advances in Positron Emission Tomography/Magnetic Resonance Imaging Technology. Magn Reson Imaging Clin N Am 2023; 31:503-515. [PMID: 37741638 DOI: 10.1016/j.mric.2023.06.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/25/2023]
Abstract
More than a decade has passed since the deployment of the first commercial whole-body hybrid PET/MR scanner in the clinic. The major advantages and limitations of this technology have been investigated from technical and medical perspectives. Despite the remarkable advantages associated with hybrid PET/MR imaging, such as reduced radiation dose and fully simultaneous functional and structural imaging, this technology has faced major challenges in terms of mutual interference between the MRI and PET components, in addition to the complexity of achieving quantitative imaging owing to the intricate MRI-guided attenuation correction in PET/MRI. In this review, the latest technical developments in PET/MRI technology as well as state-of-the-art solutions to the major challenges of quantitative PET/MR imaging are discussed.
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211, Switzerland; Geneva University Neurocenter, Geneva University, Geneva CH-1205, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen 9700 RB, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense 500, Denmark
16
Gu F, Wu Q. Quantitation of dynamic total-body PET imaging: recent developments and future perspectives. Eur J Nucl Med Mol Imaging 2023; 50:3538-3557. [PMID: 37460750 PMCID: PMC10547641 DOI: 10.1007/s00259-023-06299-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 06/05/2023] [Indexed: 10/04/2023]
Abstract
BACKGROUND Positron emission tomography (PET) scanning is an important diagnostic imaging technique used in disease diagnosis, therapy planning, treatment monitoring, and medical research. The standardized uptake value (SUV) obtained at a single time frame has been widely employed in clinical practice. Well beyond this simple static measure, more detailed metabolic information can be recovered from dynamic PET scans, followed by recovery of the arterial input function and application of appropriate tracer kinetic models. Many efforts have been devoted to the development of quantitative techniques over the last couple of decades. CHALLENGES The advent of new-generation total-body PET scanners characterized by ultra-high sensitivity and a long axial field of view, i.e., uEXPLORER (United Imaging Healthcare), PennPET Explorer (University of Pennsylvania), and Biograph Vision Quadra (Siemens Healthineers), further opens opportunities to derive kinetics for multiple organs simultaneously. However, some emerging issues also need to be addressed, e.g., the large-scale data size and organ-specific physiology. The direct implementation of classical methods for total-body PET imaging without proper validation may lead to less accurate results. CONCLUSIONS In this contribution, the published dynamic total-body PET datasets are outlined, and several challenges/opportunities for quantitation of such studies are presented. An overview of the basic equation, calculation of the input function (based on blood sampling, images, population data, or mathematical models), and kinetic analysis encompassing parametric (compartmental model, graphical plot, and spectral analysis) and non-parametric (B-spline and piece-wise basis elements) approaches is provided. The discussion mainly focuses on the feasibility, recent developments, and future perspectives of these methodologies for a diverse-tissue environment.
Affiliation(s)
- Fengyun Gu
- School of Mathematics and Physics, North China Electric Power University, 102206, Beijing, China.
- School of Mathematical Sciences, University College Cork, T12XF62, Cork, Ireland.
- Qi Wu
- School of Mathematical Sciences, University College Cork, T12XF62, Cork, Ireland
17
Sanaat A, Shooli H, Böhringer AS, Sadeghi M, Shiri I, Salimi Y, Ginovart N, Garibotto V, Arabi H, Zaidi H. A cycle-consistent adversarial network for brain PET partial volume correction without prior anatomical information. Eur J Nucl Med Mol Imaging 2023; 50:1881-1896. [PMID: 36808000 PMCID: PMC10199868 DOI: 10.1007/s00259-023-06152-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 02/12/2023] [Indexed: 02/23/2023]
Abstract
PURPOSE Partial volume effect (PVE) is a consequence of the limited spatial resolution of PET scanners. PVE can cause the intensity values of a particular voxel to be underestimated or overestimated due to the effect of surrounding tracer uptake. We propose a novel partial volume correction (PVC) technique to overcome the adverse effects of PVE on PET images. METHODS Two hundred and twelve clinical brain PET scans, including 50 18F-Fluorodeoxyglucose (18F-FDG), 50 18F-Flortaucipir, 36 18F-Flutemetamol, and 76 18F-FluoroDOPA, and their corresponding T1-weighted MR images were enrolled in this study. The Iterative Yang technique was used for PVC as a reference or surrogate of the ground truth for evaluation. A cycle-consistent adversarial network (CycleGAN) was trained to directly map non-PVC PET images to PVC PET images. Quantitative analysis using various metrics, including structural similarity index (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR), was performed. Furthermore, voxel-wise and region-wise-based correlations of activity concentration between the predicted and reference images were evaluated through joint histogram and Bland and Altman analysis. In addition, radiomic analysis was performed by calculating 20 radiomic features within 83 brain regions. Finally, a voxel-wise two-sample t-test was used to compare the predicted PVC PET images with reference PVC images for each radiotracer. RESULTS The Bland and Altman analysis showed the largest and smallest variance for 18F-FDG (95% CI: −0.29, +0.33 SUV, mean = 0.02 SUV) and 18F-Flutemetamol (95% CI: −0.26, +0.24 SUV, mean = −0.01 SUV), respectively. The PSNR was lowest (29.64 ± 1.13 dB) for 18F-FDG and highest (36.01 ± 3.26 dB) for 18F-Flutemetamol. The smallest and largest SSIM were achieved for 18F-FDG (0.93 ± 0.01) and 18F-Flutemetamol (0.97 ± 0.01), respectively.
The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, while it was 4.74%, 8.80%, 7.27%, and 6.81% for the NGLDM_contrast feature, for 18F-Flutemetamol, 18F-FluoroDOPA, 18F-FDG, and 18F-Flortaucipir, respectively. CONCLUSION An end-to-end CycleGAN PVC method was developed and evaluated. Our model generates PVC images from the original non-PVC PET images without requiring additional anatomical information, such as MRI or CT. It eliminates the need for accurate registration, segmentation, or characterization of the PET scanner system response. In addition, no assumptions regarding anatomical structure size, homogeneity, boundary, or background level are required.
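The RMSE and PSNR figures quoted in this abstract follow their standard definitions. As a rough illustration, both can be computed in a few lines of NumPy — a minimal sketch; the function names and toy data below are ours, not the authors' implementation:

```python
import numpy as np

def rmse(ref, pred):
    """Root mean squared error between a reference and a predicted image."""
    ref = np.asarray(ref, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return float(np.sqrt(np.mean((ref - pred) ** 2)))

def psnr(ref, pred, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the
    dynamic range of the reference image."""
    ref = np.asarray(ref, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    return float(20.0 * np.log10(data_range / rmse(ref, pred)))

# toy example: a noisy copy of a synthetic "image"
rng = np.random.default_rng(0)
ref_img = rng.random((64, 64))
pred_img = ref_img + 0.01 * rng.standard_normal((64, 64))
```

In practice these metrics are evaluated per scan and then averaged across the test cohort, as the mean ± SD values above suggest.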
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Hossein Shooli
- Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy (MIRT), Bushehr Medical University Hospital, Faculty of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Andrew Stephen Böhringer
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Maryam Sadeghi
- Department of Medical Statistics, Informatics and Health Economics, Medical University of Innsbruck, Schoepfstr. 41, Innsbruck, Austria
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Nathalie Ginovart
- Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
- Department of Psychiatry, Geneva University, Geneva, Switzerland
- Department of Basic Neuroscience, Geneva University, Geneva, Switzerland
- Valentina Garibotto
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
18
George A, Kim DN, Moser T, Gildea IT, Evans JE, Cheung MS. Graph identification of proteins in tomograms (GRIP-Tomo). Protein Sci 2023; 32:e4538. [PMID: 36482866 PMCID: PMC9798246 DOI: 10.1002/pro.4538] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 11/23/2022] [Accepted: 12/03/2022] [Indexed: 12/14/2022]
Abstract
In this study, we present a method of pattern mining based on network theory that enables the identification of protein structures or complexes from synthetic volume densities, without knowledge of predefined templates or human biases for refinement. We hypothesized that the topological connectivity of a protein structure is invariant and distinctive enough to identify the protein from the distorted data presented in volume densities. Three-dimensional densities of a protein or a complex from simulated tomographic volumes were transformed into mathematical graphs as observables. We systematically introduced data distortions or defects, such as missing fullness of data, the tumbling effect, and the missing wedge effect, into the simulated volumes, and varied the distance cutoffs in pixels to capture the varying connectivity between the density cluster centroids in the presence of defects. A similarity score between the graphs from the simulated volumes and the graphs transformed from the physical protein structures in point data was calculated by comparing their network theory order parameters, including node degrees, betweenness centrality, and graph densities. By capturing the essential topological features defining the heterogeneous morphologies of a network, we were able to accurately identify proteins and homo-multimeric complexes from 10 topologically distinctive samples without realistic noise added. Our approach empowers future developments in tomogram processing by providing pattern mining with interpretability, enabling the classification of single-domain protein native topologies as well as the discrimination of distinct single-domain proteins from multimeric complexes within noisy volumes.
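Two of the network-theory order parameters named here — node degrees and graph density — together with a degree-sequence similarity score can be sketched in plain Python. This is an illustrative simplification of the paper's approach with hypothetical function names; betweenness centrality, also used by the authors, is omitted for brevity:

```python
def degrees(n_nodes, edges):
    """Degree of each node in an undirected graph given as an edge list."""
    deg = [0] * n_nodes
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def graph_density(n_nodes, edges):
    """Fraction of all possible undirected edges that are present."""
    if n_nodes < 2:
        return 0.0
    return 2.0 * len(edges) / (n_nodes * (n_nodes - 1))

def degree_similarity(deg_a, deg_b):
    """Crude similarity in [0, 1] between two degree sequences of equal
    length: 1 minus the normalized L1 distance of the sorted sequences."""
    a, b = sorted(deg_a), sorted(deg_b)
    diff = sum(abs(x - y) for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1.0 - diff / total if total else 1.0
```

A template-free classifier in this spirit would compare such parameters between graphs derived from volume densities and graphs derived from known structures.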
Affiliation(s)
- August George
- Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland, Washington, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon, USA
- Doo Nam Kim
- Biological Science Division, Pacific Northwest National Laboratory, Richland, Washington, USA
- Trevor Moser
- Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland, Washington, USA
- Ian T. Gildea
- Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland, Washington, USA
- James E. Evans
- Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland, Washington, USA
- School of Biological Sciences, Washington State University, Pullman, Washington, USA
- Margaret S. Cheung
- Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland, Washington, USA
- Department of Physics, University of Washington, Seattle, Washington, USA
19
Sanaat A, Akhavanalaf A, Shiri I, Salimi Y, Arabi H, Zaidi H. Deep-TOF-PET: Deep learning-guided generation of time-of-flight from non-TOF brain PET images in the image and projection domains. Hum Brain Mapp 2022; 43:5032-5043. [PMID: 36087092 PMCID: PMC9582376 DOI: 10.1002/hbm.26068] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 08/18/2022] [Indexed: 11/12/2022] Open
Abstract
We aim to synthesize brain time-of-flight (TOF) PET images/sinograms from their corresponding non-TOF information in the image space (IS) and sinogram space (SS) to increase the signal-to-noise ratio (SNR) and contrast of abnormalities, and decrease the bias in tracer uptake quantification. One hundred forty clinical brain 18F-FDG PET/CT scans were collected to generate TOF and non-TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). The predicted TOF sinograms were reconstructed, and the performance of both models (IS and SS) was compared with the reference TOF and non-TOF images. Wide-ranging quantitative and statistical analysis metrics, including the structural similarity index metric (SSIM) and root mean square error (RMSE), as well as 28 radiomic features for 83 brain regions, were extracted to evaluate the performance of the CycleGAN model. SSIM and RMSE values of 0.99 ± 0.03, 0.98 ± 0.02 and 0.12 ± 0.09, 0.16 ± 0.04 were achieved for the generated TOF-PET images in IS and SS, respectively. They were 0.97 ± 0.03 and 0.22 ± 0.12, respectively, for non-TOF-PET images. The Bland–Altman analysis revealed that the lowest tracer uptake bias (-0.02%) and minimum variance (95% CI: -0.17%, +0.21%) were achieved for TOF-PET images generated in IS. For malignant lesions, the contrast in the test dataset was enhanced from 3.22 ± 2.51 for non-TOF to 3.34 ± 0.41 and 3.65 ± 3.10 for TOF PET in SS and IS, respectively. The implemented CycleGAN is capable of generating TOF from non-TOF PET images to achieve better image quality.
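The Bland–Altman figures quoted in this and several neighboring abstracts (mean bias plus 95% limits of agreement) follow the standard construction. A minimal NumPy sketch, assuming approximately normally distributed paired differences; the function name is ours:

```python
import numpy as np

def bland_altman(reference, predicted):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 SD of the
    paired differences) between two sets of paired measurements."""
    ref = np.asarray(reference, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    diff = pred - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Here the "variance" reported in the abstracts corresponds to the width of the limits-of-agreement interval.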
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Azadeh Akhavanalaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
20
21
Zaker N, Haddad K, Faghihi R, Arabi H, Zaidi H. Direct inference of Patlak parametric images in whole-body PET/CT imaging using convolutional neural networks. Eur J Nucl Med Mol Imaging 2022; 49:4048-4063. [PMID: 35716176 PMCID: PMC9525418 DOI: 10.1007/s00259-022-05867-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Accepted: 06/09/2022] [Indexed: 11/20/2022]
Abstract
Purpose This study proposed and investigated the feasibility of estimating Patlak-derived influx rate constant (Ki) from standardized uptake value (SUV) and/or dynamic PET image series. Methods Whole-body 18F-FDG dynamic PET images of 19 subjects consisting of 13 frames or passes were employed for training a residual deep learning model with SUV and/or dynamic series as input and Ki-Patlak (slope) images as output. The training and evaluation were performed using a nine-fold cross-validation scheme. Owing to the availability of SUV images acquired 60 min post-injection (20 min total acquisition time), the data sets used for the training of the models were split into two groups: “With SUV” and “Without SUV.” For “With SUV” group, the model was first trained using only SUV images and then the passes (starting from pass 13, the last pass, to pass 9) were added to the training of the model (one pass each time). For this group, 6 models were developed with input data consisting of SUV, SUV plus pass 13, SUV plus passes 13 and 12, SUV plus passes 13 to 11, SUV plus passes 13 to 10, and SUV plus passes 13 to 9. For the “Without SUV” group, the same trend was followed, but without using the SUV images (5 models were developed with input data of passes 13 to 9). For model performance evaluation, the mean absolute error (MAE), mean error (ME), mean relative absolute error (MRAE%), relative error (RE%), mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between the predicted Ki-Patlak images by the two groups and the reference Ki-Patlak images generated through Patlak analysis using the whole acquired data sets. For specific evaluation of the method, regions of interest (ROIs) were drawn on representative organs, including the lung, liver, brain, and heart and around the identified malignant lesions. 
Results The MRAE%, RE%, PSNR, and SSIM indices across all patients were estimated as 7.45 ± 0.94%, 4.54 ± 2.93%, 46.89 ± 2.93, and 1.00 ± 6.7 × 10⁻⁷, respectively, for the models trained using SUV plus passes 13 to 9 as input. The parameters predicted using passes 13 to 11 as input exhibited almost similar results compared to the models using SUV plus passes 13 to 9 as input. Yet, the bias was continuously reduced by adding passes until pass 11, after which the magnitude of error reduction was negligible. Hence, the model trained with SUV plus passes 13 to 9 had the lowest quantification bias. Lesions invisible in one or both of the SUV and Ki-Patlak images appeared similarly, on visual inspection, in the predicted images with tolerable bias. Conclusion This study demonstrated the feasibility of a direct deep learning-based approach to estimate Ki-Patlak parametric maps without requiring the input function and with fewer passes. This would lead to shorter acquisition times for whole-body dynamic imaging with acceptable bias and comparable lesion detectability. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05867-w.
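The reference Ki-Patlak images above are obtained by graphical (Patlak) analysis, in which Ki is the slope of the tissue-to-plasma ratio plotted against "normalized time" (the cumulative integral of the input function divided by its instantaneous value). A minimal NumPy sketch of that fit — the function name and the trapezoidal integration are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=0.0):
    """Estimate the Patlak influx rate constant Ki (slope) and intercept V.
    t: frame mid-times; cp: plasma input function; ct: tissue activity;
    t_star: start of the linear (equilibrated) phase."""
    t = np.asarray(t, float)
    cp = np.asarray(cp, float)
    ct = np.asarray(ct, float)
    # cumulative integral of the input function (trapezoidal rule)
    int_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = int_cp / cp          # "normalized time"
    y = ct / cp              # tissue-to-plasma ratio
    mask = t >= t_star       # use only frames after equilibration
    ki, v = np.polyfit(x[mask], y[mask], 1)
    return ki, v
```

The deep learning model described in the abstract bypasses this fit (and the input function) by regressing the Ki map directly from SUV and/or late dynamic passes.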
Affiliation(s)
- Neda Zaker
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- School of Mechanical Engineering, Department of Nuclear Engineering, Shiraz University, Shiraz, Iran
- Kamal Haddad
- School of Mechanical Engineering, Department of Nuclear Engineering, Shiraz University, Shiraz, Iran
- Reza Faghihi
- School of Mechanical Engineering, Department of Nuclear Engineering, Shiraz University, Shiraz, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
22
Adler SS, Seidel J, Choyke PL. Advances in Preclinical PET. Semin Nucl Med 2022; 52:382-402. [PMID: 35307164 PMCID: PMC9038721 DOI: 10.1053/j.semnuclmed.2022.02.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/11/2022] [Accepted: 02/14/2022] [Indexed: 12/18/2022]
Abstract
The classical intent of PET imaging is to obtain the most accurate estimate of the amount of positron-emitting radiotracer in the smallest possible volume element located anywhere in the imaging subject at any time using the least amount of radioactivity. Reaching this goal, however, is confounded by an enormous array of interlinked technical issues that limit imaging system performance. As a result, advances in PET, human or animal, are the result of cumulative innovations across each of the component elements of PET, from data acquisition to image analysis. In the report that follows, we trace several of these advances across the imaging process with a focus on small animal PET.
Affiliation(s)
- Stephen S Adler
- Frederick National Laboratory for Cancer Research, Frederick, MD
- Molecular Imaging Branch, National Cancer Institute, Bethesda, MD
- Jurgen Seidel
- Contractor to Frederick National Laboratory for Cancer Research, Leidos Biomedical Research, Inc., Frederick, MD
- Molecular Imaging Branch, National Cancer Institute, Bethesda, MD
- Peter L Choyke
- Molecular Imaging Branch, National Cancer Institute, Bethesda, MD
23
Overall Survival Prognostic Modelling of Non-small Cell Lung Cancer Patients Using Positron Emission Tomography/Computed Tomography Harmonised Radiomics Features: The Quest for the Optimal Machine Learning Algorithm. Clin Oncol (R Coll Radiol) 2021; 34:114-127. [PMID: 34872823 DOI: 10.1016/j.clon.2021.11.014] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Revised: 11/01/2021] [Accepted: 11/17/2021] [Indexed: 02/06/2023]
Abstract
AIMS Despite the promising results achieved by radiomics prognostic models for various clinical applications, multiple challenges still need to be addressed. The two main limitations of radiomics prognostic models include information limitation owing to single imaging modalities and the selection of optimum machine learning and feature selection methods for the considered modality and clinical outcome. In this work, we applied several feature selection and machine learning methods to single-modality positron emission tomography (PET) and computed tomography (CT) and multimodality PET/CT fusion to identify the best combinations for different radiomics modalities towards overall survival prediction in non-small cell lung cancer patients. MATERIALS AND METHODS A PET/CT dataset from The Cancer Imaging Archive, including subjects from two independent institutions (87 and 95 patients), was used in this study. Each cohort was used once as training and once as a test, followed by averaging of the results. ComBat harmonisation was used to address the centre effect. In our proposed radiomics framework, apart from single-modality PET and CT models, multimodality radiomics models were developed using multilevel (feature and image levels) fusion. Two different methods were considered for the feature-level strategy, including concatenating PET and CT features into a single feature set and alternatively averaging them. For image-level fusion, we used three different fusion methods, namely wavelet fusion, guided filtering-based fusion and latent low-rank representation fusion. In the proposed prognostic modelling framework, combinations of four feature selection and seven machine learning methods were applied to all radiomics modalities (two single and five multimodalities), machine learning hyper-parameters were optimised and finally the models were evaluated in the test cohort with 1000 repetitions via bootstrapping. 
Feature selection and machine learning methods were chosen for their popularity in the literature, the availability of open-source implementations in the public domain, and their ability to cope with continuous time-to-event survival data. Multifactor ANOVA was used for variability analysis, and the proportion of total variance explained by radiomics modality, feature selection, and machine learning methods was calculated by a bias-corrected effect size estimate known as ω2. RESULTS The optimum feature selection and machine learning methods differed depending on the applied radiomics modality. However, minimum depth (MD) as the feature selection method and the Lasso and Elastic-Net regularized generalized linear model (glmnet) as the machine learning method had the highest average results. Results from the ANOVA test indicated that the variability each factor (radiomics modality, feature selection, and machine learning method) introduces to the performance of models is case specific, i.e. variances differ across radiomics modalities and fusion strategies. Overall, the greatest proportion of variance was explained by machine learning, except for models in the feature-level fusion strategy. CONCLUSION The identification of optimal feature selection and machine learning methods is a crucial step in developing sound and accurate radiomics risk models. Furthermore, the optimum methods are case specific, differing with the radiomics modality and fusion strategy used.
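The two feature-level fusion strategies described in the Materials and Methods — concatenating PET and CT feature vectors versus averaging them — amount to simple array operations on per-patient feature matrices. A hedged NumPy sketch (the function name is ours; averaging assumes the same feature set was extracted from both modalities):

```python
import numpy as np

def fuse_features(pet_feats, ct_feats, strategy="concatenate"):
    """Feature-level fusion of radiomic feature matrices shaped
    (n_patients, n_features)."""
    pet = np.asarray(pet_feats, dtype=float)
    ct = np.asarray(ct_feats, dtype=float)
    if strategy == "concatenate":
        # stack PET and CT features side by side: (n_patients, 2 * n_features)
        return np.hstack([pet, ct])
    if strategy == "average":
        # element-wise mean of matched PET and CT features
        return 0.5 * (pet + ct)
    raise ValueError(f"unknown strategy: {strategy}")
```

Concatenation doubles the feature dimensionality (raising the burden on feature selection), whereas averaging keeps it fixed at the cost of blurring modality-specific signal — one reason the study finds the optimal pipeline to be fusion-strategy dependent.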
24
Amirrashedi M, Sarkar S, Mamizadeh H, Ghadiri H, Ghafarian P, Zaidi H, Ay MR. Leveraging deep neural networks to improve numerical and perceptual image quality in low-dose preclinical PET imaging. Comput Med Imaging Graph 2021; 94:102010. [PMID: 34784505 DOI: 10.1016/j.compmedimag.2021.102010] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 10/25/2021] [Accepted: 10/26/2021] [Indexed: 01/24/2023]
Abstract
The amount of radiotracer injected into laboratory animals is still the most daunting challenge facing translational PET studies. Since low-dose imaging is characterized by a higher level of noise, the quality of the reconstructed images leaves much to be desired. Edge-aware denoising filters and reconstruction-based techniques, the most ubiquitous approaches in denoising applications, have drawn significant attention in low-count settings. However, over the last few years much of the credit has gone to deep learning (DL) methods, which provide more robust solutions across a variety of conditions. Although extensively explored in clinical studies, to the best of our knowledge there is a lack of studies exploring the feasibility of DL-based image denoising in low-count small-animal PET imaging. Therefore, herein, we investigated different DL frameworks to map low-dose small-animal PET images to their full-dose equivalents with quality and visual similarity on a par with those of standard acquisitions. The performance of the DL model was also compared to other well-established filters, including Gaussian smoothing, nonlocal means, and anisotropic diffusion. Visual inspection and quantitative assessment based on image quality metrics demonstrated the superior performance of the DL methods in low-count small-animal PET studies, paving the way for a more detailed exploration of DL-assisted algorithms in this domain.
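Of the conventional baselines the DL model is compared against, Gaussian smoothing is the simplest: a separable convolution with a normalized Gaussian kernel. A NumPy-only sketch for a 2D slice — illustrative only, since the study's actual filter parameters are not given here, and the function names are ours:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_smooth(img, sigma):
    """Separable Gaussian smoothing of a 2D image: convolve each row,
    then each column, with the same 1-D kernel (zero padding at borders)."""
    img = np.asarray(img, dtype=float)
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, out)
    return out
```

Such a filter suppresses noise but blurs edges uniformly, which is precisely the failure mode that edge-aware filters and DL denoisers are designed to avoid.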
Affiliation(s)
- Mahsa Amirrashedi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Saeed Sarkar
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hojjat Mamizadeh
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Ghadiri
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Pardis Ghafarian
- Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran
- PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Mohammad Reza Ay
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
25
Deep learning-based denoising of low-dose SPECT myocardial perfusion images: quantitative assessment and clinical performance. Eur J Nucl Med Mol Imaging 2021; 49:1508-1522. [PMID: 34778929 PMCID: PMC8940834 DOI: 10.1007/s00259-021-05614-7] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 11/01/2021] [Indexed: 11/28/2022]
Abstract
Purpose This work set out to investigate the feasibility of dose reduction in SPECT myocardial perfusion imaging (MPI) without sacrificing diagnostic accuracy. A deep learning approach was proposed to synthesize full-dose images from the corresponding low-dose images at different dose reduction levels in the projection space. Methods Clinical SPECT-MPI images of 345 patients acquired on a dedicated cardiac SPECT camera in list-mode format were retrospectively employed to predict standard-dose from low-dose images at half-, quarter-, and one-eighth-dose levels. To simulate realistic low-dose projections, 50%, 25%, and 12.5% of the events were randomly selected from the list-mode data by applying binomial subsampling. A generative adversarial network was implemented to predict non-gated standard-dose SPECT images in the projection space at the different dose reduction levels. Well-established metrics, including peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and structural similarity index metrics (SSIM), in addition to Pearson correlation coefficient analysis and clinical parameters derived from Cedars-Sinai software, were used to quantitatively assess the predicted standard-dose images. For clinical evaluation, the quality of the predicted standard-dose images was evaluated by a nuclear medicine specialist using a seven-point (−3 to +3) grading scheme. Results The highest PSNR (42.49 ± 2.37) and SSIM (0.99 ± 0.01) and the lowest RMSE (1.99 ± 0.63) were achieved at the half-dose level. Pearson correlation coefficients were 0.997 ± 0.001, 0.994 ± 0.003, and 0.987 ± 0.004 for the predicted standard-dose images at half-, quarter-, and one-eighth-dose levels, respectively. Using the standard-dose images as reference, the Bland–Altman plots sketched for the Cedars-Sinai selected parameters exhibited remarkably less bias and variance in the predicted standard-dose images compared with the low-dose images at all reduced dose levels.
Overall, considering the clinical assessment performed by a nuclear medicine specialist, 100%, 80%, and 11% of the predicted standard-dose images were deemed clinically acceptable at half-, quarter-, and one-eighth-dose levels, respectively. Conclusion The noise was effectively suppressed by the proposed network, and the predicted standard-dose images were comparable to the reference standard-dose images at half- and quarter-dose levels. However, recovery of the underlying signal beyond a quarter of the standard dose is not feasible owing to the very poor signal-to-noise ratio, which would adversely affect the clinical interpretation of the resulting images. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05614-7.
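The low-dose simulation described in the Methods — randomly keeping 50%, 25%, or 12.5% of recorded events — is binomial thinning of the measured counts, which preserves Poisson statistics at the reduced mean. A minimal NumPy sketch; the function name and toy projection are ours:

```python
import numpy as np

def subsample_counts(projection, fraction, seed=None):
    """Simulate a reduced-dose acquisition by binomial thinning:
    each recorded count is kept independently with probability `fraction`."""
    rng = np.random.default_rng(seed)
    return rng.binomial(np.asarray(projection), fraction)

# e.g. a half-dose realization of a toy Poisson-noisy projection
full = np.random.default_rng(1).poisson(100.0, size=(32, 32))
half = subsample_counts(full, 0.5, seed=2)
```

In the study this thinning is applied to list-mode events rather than binned projections, but the statistical effect is the same.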
26
Sanaat A, Shooli H, Ferdowsi S, Shiri I, Arabi H, Zaidi H. DeepTOFSino: A deep learning model for synthesizing full-dose time-of-flight bin sinograms from their corresponding low-dose sinograms. Neuroimage 2021; 245:118697. [PMID: 34742941 DOI: 10.1016/j.neuroimage.2021.118697] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Revised: 09/21/2021] [Accepted: 10/29/2021] [Indexed: 11/29/2022] Open
Abstract
PURPOSE Reducing the injected activity and/or the scanning time is a desirable goal to minimize radiation exposure and maximize patients' comfort. To achieve this goal, we developed a deep neural network (DNN) model for synthesizing full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms. METHODS Clinical brain PET/CT raw data of 140 normal and abnormal patients were employed to create LD and FD TOF bin sinograms. The LD TOF sinograms were created through 5% undersampling of the FD list-mode PET data. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). Residual network (ResNet) algorithms were trained separately to generate FD bins from LD bins. An extra ResNet model was trained to synthesize FD images from LD images to compare the performance of the DNN in sinogram space (SS) vs implementation in image space (IS). Comprehensive quantitative and statistical analysis was performed to assess the performance of the proposed model using established quantitative metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), region-wise standardized uptake value (SUV) bias, and statistical analysis for 83 brain regions. RESULTS SSIM and PSNR values of 0.97 ± 0.01, 0.98 ± 0.01 and 33.70 ± 0.32, 39.36 ± 0.21 were obtained for IS and SS, respectively, compared to 0.86 ± 0.02 and 31.12 ± 0.22 for the reference LD images. The absolute average SUV bias was 0.96 ± 0.95% and 1.40 ± 0.72% for the SS and IS implementations, respectively. The joint histogram analysis revealed that the lowest mean square error (MSE) and highest correlation (R2 = 0.99, MSE = 0.019) were achieved by SS compared to IS (R2 = 0.97, MSE = 0.028). The Bland–Altman analysis showed that the lowest SUV bias (-0.4%) and minimum variance (95% CI: -2.6%, +1.9%) were achieved by SS images.
The voxel-wise t-test analysis revealed the presence of voxels with statistically significantly lower values in the LD, IS, and SS images compared to the FD images. CONCLUSION The results demonstrated that images reconstructed from the predicted TOF FD sinograms using the SS approach exhibit higher image quality and lower bias compared to images predicted from LD images.
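The joint-histogram agreement reported above (R2 and MSE between predicted and reference images) can be reproduced voxel-wise with NumPy; a minimal sketch with an illustrative function name of our choosing:

```python
import numpy as np

def correlation_and_mse(ref, pred):
    """Voxel-wise squared Pearson correlation (R^2) and mean squared error
    between two images, as used in joint-histogram agreement analysis."""
    a = np.asarray(ref, dtype=float).ravel()
    b = np.asarray(pred, dtype=float).ravel()
    r = np.corrcoef(a, b)[0, 1]  # Pearson correlation of flattened voxels
    mse = np.mean((a - b) ** 2)
    return r ** 2, mse
```

High R2 with low MSE indicates the predicted images track the reference both in pattern and in absolute intensity, which is why the abstract reports both numbers together.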
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Shooli
- Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy (MIRT), Bushehr Medical University Hospital, Faculty of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Sohrab Ferdowsi
- University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
27
le Guevelou J, Achard V, Mainta I, Zaidi H, Garibotto V, Latorzeff I, Sargos P, Ménard C, Zilli T. PET/CT-Based Salvage Radiotherapy for Recurrent Prostate Cancer After Radical Prostatectomy: Impact on Treatment Management and Future Directions. Front Oncol 2021; 11:742093. [PMID: 34532294 PMCID: PMC8438304 DOI: 10.3389/fonc.2021.742093] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 08/09/2021] [Indexed: 12/25/2022] Open
Abstract
Biochemical recurrence is a clinical situation experienced by 20 to 40% of prostate cancer patients treated with radical prostatectomy (RP). Prostate bed (PB) radiation therapy (RT) remains the mainstay salvage treatment, although it remains non-curative for up to 30% of patients, who develop further recurrence. Positron emission tomography with computed tomography (PET/CT) using prostate cancer-targeting radiotracers has emerged in the last decade as a new-generation imaging technique characterized by better restaging accuracy than conventional imaging. By enabling treatment to be adapted to the sites of recurrence, the implementation of restaging PET/CT in clinical practice is challenging the established therapeutic standards born from randomized controlled trials. This article reviews the potential impact of restaging PET/CT on changes in the management of recurrent prostate cancer after RP. Based on PET/CT findings, it addresses potential adaptation of RT target volumes and doses, as well as use of androgen-deprivation therapy (ADT). However, the impact of such management changes on the oncological outcomes of PET/CT-based salvage RT strategies is as yet unknown.
Affiliation(s)
- Jennifer le Guevelou
- Division of Radiation Oncology, Geneva University Hospital, Geneva, Switzerland; Division of Radiation Oncology, Centre François Baclesse, Caen, France
- Vérane Achard
- Division of Radiation Oncology, Geneva University Hospital, Geneva, Switzerland; Faculty of Medicine, Geneva University, Geneva, Switzerland
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Diagnostic Department, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Faculty of Medicine, Geneva University, Geneva, Switzerland; Division of Nuclear Medicine and Molecular Imaging, Diagnostic Department, Geneva University Hospital, Geneva, Switzerland; Geneva Neuroscience Center, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Valentina Garibotto
- Faculty of Medicine, Geneva University, Geneva, Switzerland; Division of Nuclear Medicine and Molecular Imaging, Diagnostic Department, Geneva University Hospital, Geneva, Switzerland; Geneva Neuroscience Center, Geneva University, Geneva, Switzerland
- Igor Latorzeff
- Department of Radiation Oncology, Groupe Oncorad-Garonne, Clinique Pasteur, Toulouse, France
- Paul Sargos
- Department of Radiation Oncology, Institut Bergonié, Bordeaux, France
- Cynthia Ménard
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal (CHUM), Montréal, QC, Canada
- Thomas Zilli
- Division of Radiation Oncology, Geneva University Hospital, Geneva, Switzerland; Faculty of Medicine, Geneva University, Geneva, Switzerland
28
Arabi H, Zaidi H. MRI-guided attenuation correction in torso PET/MRI: Assessment of segmentation-, atlas-, and deep learning-based approaches in the presence of outliers. Magn Reson Med 2021; 87:686-701. [PMID: 34480771] [PMCID: PMC9292636] [DOI: 10.1002/mrm.29003]
Abstract
Purpose We compare the performance of three commonly used MRI-guided attenuation correction approaches in torso PET/MRI, namely segmentation-, atlas-, and deep learning-based algorithms. Methods Twenty-five co-registered torso 18F-FDG PET/CT and PET/MR image sets were included. PET attenuation maps were generated from in-phase Dixon MRI using a three-tissue-class segmentation-based approach (soft tissue, lung, and background air), a voxel-wise weighting atlas-based approach, and a residual convolutional neural network. The bias in standardized uptake value (SUV) was calculated for each approach, taking CT-based attenuation-corrected PET images as reference. Beyond the overall performance assessment of these approaches, the primary focus of this work was on identifying the origins of potential outliers, notably body truncation, metal artifacts, abnormal anatomy, and small malignant lesions in the lungs. Results The deep learning approach outperformed both atlas- and segmentation-based methods, resulting in less than 4% SUV bias across the 25 patients, compared with up to 20% SUV bias in bony structures for the segmentation-based method and 9% bias in the lung for the atlas-based method. Yet, in cases of severe truncation or metal artifacts in the input MRI, the deep learning approach was outperformed by the atlas-based method, exhibiting suboptimal performance in the affected regions. Conversely, for abnormal anatomies, such as a patient presenting with one lung or a small malignant lesion in the lung, the deep learning algorithm exhibited promising performance compared with the other methods. Conclusion The deep learning-based method provides promising outcomes for synthetic CT generation from MRI. However, metal artifacts and body truncation should be specifically addressed.
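The SUV bias metric used in this abstract, with CT-based attenuation correction as reference, amounts to a relative percent difference averaged over a region. A minimal sketch, assuming `suv_test` and `suv_ref` are voxel-wise SUV arrays over a region of interest (the function name and masking threshold are illustrative, not from the paper):

```python
import numpy as np

def suv_bias_percent(suv_test: np.ndarray, suv_ref: np.ndarray) -> float:
    """Mean relative SUV bias (%) of a test attenuation-correction method
    against the CT-based reference, over a region of interest."""
    # Exclude near-zero reference voxels to avoid division by zero.
    mask = suv_ref > 1e-6
    return float(np.mean((suv_test[mask] - suv_ref[mask]) / suv_ref[mask]) * 100)

# Example: a method that systematically under-corrects SUV by 4%.
ref = np.array([2.0, 5.0, 1.5])
test = ref * 0.96
print(round(suv_bias_percent(test, ref), 1))  # -4.0
```

A negative value indicates systematic underestimation relative to CT-based correction, matching the sign convention implied by "SUV bias" in the abstract.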
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
29
Arabi H, Zaidi H. Assessment of deep learning-based PET attenuation correction frameworks in the sinogram domain. Phys Med Biol 2021; 66. [PMID: 34167094] [DOI: 10.1088/1361-6560/ac0e79]
Abstract
This study set out to investigate various deep learning frameworks for PET attenuation correction in the sinogram domain. Different models for both time-of-flight (TOF) and non-TOF PET emission data were implemented, including direct estimation of the attenuation-corrected (AC) emission sinograms from the non-AC sinograms, estimation of the attenuation correction factors (ACFs) from PET emission data, correction of scattered photons prior to training of the models, and separate training of the models for each segment of the emission sinograms. A segmentation-based two-class AC map was included as a baseline technique for comparison of the different models, considering PET/CT AC as reference. Fifty clinical TOF PET/CT brain scans were employed for training, whereas 20 were used for evaluation of the models. Quantitative analysis of the resulting PET images was carried out through region-wise standardized uptake value (SUV) bias calculation. The models relying on TOF information significantly outperformed the non-TOF models as well as the segmentation-based AC map, resulting in maximum SUV biases of 6.5%, 9.5%, and 14.0%, respectively. Estimation of ACFs from either TOF or non-TOF PET emission data was very sensitive to prior scatter correction. However, direct estimation of AC sinograms from non-AC sinograms revealed no sensitivity to scatter correction, thus obviating the need for prior scatter estimation. For TOF PET data, although direct prediction of the AC sinograms does not require prior estimation of scattered photons, it requires input/output channels equal to the number of TOF bins, which might be computationally or memory-wise expensive. Prediction of the ACF matrices from TOF emission data is less demanding in terms of memory, as it requires only a single output channel. AC in the sinogram domain of TOF PET data exhibited superior performance compared with both non-TOF and segmentation-based methods. However, such models require multiple input/output channels.
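The two prediction targets contrasted in this abstract are related multiplicatively: an attenuation-corrected sinogram is the non-AC sinogram scaled element-wise by the ACFs. A minimal illustration of that relationship (all arrays and dimensions are hypothetical, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical non-AC emission sinogram (radial bins x angles) and ACFs >= 1.
sino_nonac = rng.uniform(0.1, 10.0, size=(64, 96))
acf = rng.uniform(1.0, 8.0, size=(64, 96))

# Target of the "direct" model family: the AC sinogram itself.
sino_ac = sino_nonac * acf

# Target of the alternative model family: the ACF matrix, which is
# recoverable by element-wise division wherever counts are nonzero.
acf_recovered = sino_ac / sino_nonac
print(np.allclose(acf_recovered, acf))  # True
```

This is why the direct approach needs one output channel per TOF bin (a full sinogram per bin), while predicting the ACF matrix needs only a single output channel: attenuation factors do not depend on the TOF bin.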
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark
30
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008]