1
Bahloul MA, Jabeen S, Benoumhani S, Alsaleh HA, Belkhatir Z, Al‐Wabil A. Advancements in synthetic CT generation from MRI: A review of techniques, and trends in radiation therapy planning. J Appl Clin Med Phys 2024;25:e14499. PMID: 39325781; PMCID: PMC11539972; DOI: 10.1002/acm2.14499.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) and computed tomography (CT) are crucial imaging techniques in both diagnostic imaging and radiation therapy. MRI provides excellent soft tissue contrast but lacks the direct electron density data needed to calculate dose. CT, on the other hand, remains the gold standard in radiation therapy planning (RTP) due to its accurate electron density information, but it exposes patients to ionizing radiation. Synthetic CT (sCT) generation from MRI has been an active field of study in recent years, owing to its cost effectiveness and the goal of minimizing the side effects of using more than one imaging modality for treatment simulation. It offers significant time and cost efficiencies, bypasses the complexities of co-registration, and can potentially improve treatment accuracy by minimizing registration-related errors. In an effort to navigate the quickly developing field of precision medicine, this paper investigates recent advancements in sCT generation techniques, particularly those using machine learning (ML) and deep learning (DL). The review highlights the potential of these techniques to improve the efficiency and accuracy of sCT generation for use in RTP, thereby improving patient care and reducing healthcare costs. The intricate web of sCT generation techniques is scrutinized critically, with clinical implications and technical underpinnings for enhanced patient care revealed. PURPOSE This review aims to provide an overview of the most recent advancements in sCT generation from MRI with a particular focus on its use within RTP, emphasizing techniques, performance evaluation, clinical applications, future research trends, and open challenges in the field. METHODS A thorough search strategy was employed to conduct a systematic literature review across major scientific databases. Focusing on the past decade's advancements, this review critically examines emerging approaches introduced from 2013 to 2023 for generating sCT from MRI, providing a comprehensive analysis of their methodologies and ultimately fostering further advancement in the field. This study highlighted significant contributions, identified challenges, and provided an overview of successes within RTP. Classifying the identified approaches, contrasting their advantages and disadvantages, and identifying broad trends were all part of the review's synthesis process. RESULTS The review identifies various sCT generation approaches, consisting of atlas-based, segmentation-based, multi-modal fusion, hybrid, and ML- and DL-based techniques. These approaches are evaluated for image quality, dosimetric accuracy, and clinical acceptability. They are used for MRI-only radiation treatment, adaptive radiotherapy, and MR/PET attenuation correction. The review also highlights the diversity of methodologies for sCT generation, each with its own advantages and limitations. Emerging trends include the integration of advanced imaging modalities, including various MRI sequences such as Dixon, T1-weighted (T1W), and T2-weighted (T2W) sequences, as well as hybrid approaches for enhanced accuracy. CONCLUSIONS The study examines MRI-based sCT generation with the aim of minimizing the negative effects of acquiring both modalities. It reviews 2013-2023 studies on MRI-to-sCT generation methods, aiming to revolutionize RTP by reducing the use of ionizing radiation and improving patient outcomes.
The review provides insights for researchers and practitioners, emphasizing the need for standardized validation procedures and collaborative efforts to refine methods and address limitations. It anticipates the continued evolution of techniques to improve the precision of sCT in RTP.
Affiliation(s)
- Mohamed A. Bahloul
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Saima Jabeen
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Sara Benoumhani
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Zehor Belkhatir
- School of Electronics and Computer Science, University of Southampton, Southampton, UK
- Areej Al‐Wabil
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
2
Feuerriegel GC, Sutter R. Managing hardware-related metal artifacts in MRI: current and evolving techniques. Skeletal Radiol 2024;53:1737-1750. PMID: 38381196; PMCID: PMC11303499; DOI: 10.1007/s00256-024-04624-4.
Abstract
Magnetic resonance imaging (MRI) around metal implants has been challenging due to magnetic susceptibility differences between metal implants and adjacent tissues, resulting in image signal loss, geometric distortion, and loss of fat suppression. These artifacts can compromise the diagnostic accuracy and the evaluation of surrounding anatomical structures. As the prevalence of total joint replacements continues to increase in our aging society, there is a need for proper radiological assessment of tissues around metal implants to aid clinical decision-making in the management of post-operative complaints and complications. Various techniques for reducing metal artifacts in musculoskeletal imaging have been explored in recent years. One approach focuses on improving hardware components. High-density multi-channel radiofrequency (RF) coils, parallel imaging techniques, and gradient warping correction enable signal enhancement, image acquisition acceleration, and geometric distortion minimization. In addition, the use of susceptibility-matched implants and low-field MRI helps to reduce magnetic susceptibility differences. The second approach focuses on metal artifact reduction sequences such as view-angle tilting (VAT) and slice-encoding for metal artifact correction (SEMAC). Iterative reconstruction algorithms, deep learning approaches, and post-processing techniques are used to estimate and correct artifact-related errors in reconstructed images. This article reviews recent developments in clinically applicable metal artifact reduction techniques as well as advances in MR hardware. The review provides a better understanding of the basic principles and techniques, as well as an awareness of their limitations, allowing for a more reasoned application of these methods in clinical settings.
Affiliation(s)
- Georg C Feuerriegel
- Department of Radiology, Balgrist University Hospital, Faculty of Medicine, University of Zurich, Forchstrasse 340, 8008, Zurich, Switzerland.
- Reto Sutter
- Department of Radiology, Balgrist University Hospital, Faculty of Medicine, University of Zurich, Forchstrasse 340, 8008, Zurich, Switzerland
3
Yang J, Afaq A, Sibley R, McMilan A, Pirasteh A. Deep learning applications for quantitative and qualitative PET in PET/MR: technical and clinical unmet needs. MAGMA 2024. PMID: 39167304; DOI: 10.1007/s10334-024-01199-y.
Abstract
We aim to provide an overview of technical and clinical unmet needs in deep learning (DL) applications for quantitative and qualitative PET in PET/MR, with a focus on attenuation correction, image enhancement, motion correction, kinetic modeling, and simulated data generation. (1) DL-based attenuation correction (DLAC) remains an area of limited exploration for pediatric whole-body PET/MR and lung-specific DLAC due to data shortages and technical limitations. (2) DL-based image enhancement approximating MR-guided regularized reconstruction with a high-resolution MR prior has shown promise in enhancing PET image quality. However, its clinical value has not been thoroughly evaluated across various radiotracers, and applications outside the head may pose challenges due to motion artifacts. (3) Robust training for DL-based motion correction requires pairs of motion-corrupted and motion-corrected PET/MR data. However, these pairs are rare. (4) DL-based approaches can address the limitations of dynamic PET, such as long scan durations that may cause patient discomfort and motion, providing new research opportunities. (5) Monte-Carlo simulations using anthropomorphic digital phantoms can provide extensive datasets to address the shortage of clinical data. This summary of technical/clinical challenges and potential solutions may provide research opportunities for the research community towards the clinical translation of DL solutions.
Affiliation(s)
- Jaewon Yang
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA.
- Asim Afaq
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Robert Sibley
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Alan McMilan
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
- Ali Pirasteh
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
4
Arabi H, Zaidi H. Contrastive Learning vs. Self-Learning vs. Deformable Data Augmentation in Semantic Segmentation of Medical Images. J Imaging Inform Med 2024. PMID: 38858260; DOI: 10.1007/s10278-024-01159-x.
Abstract
To develop a robust segmentation model, encoding the underlying features/structures of the input data is essential to discriminate the target structure from the background. To enrich the extracted feature maps, contrastive learning and self-learning techniques are employed, particularly when the size of the training dataset is limited. In this work, we set out to investigate the impact of contrastive learning and self-learning on the performance of deep learning-based semantic segmentation. To this end, three different datasets were employed for brain tumor and hippocampus delineation from MR images (BraTS and Decathlon datasets, respectively) and kidney segmentation from CT images (Decathlon dataset). Since data augmentation techniques are also aimed at enhancing the performance of deep learning methods, a deformable data augmentation technique was proposed and compared with contrastive learning and self-learning frameworks. The segmentation accuracy for the three datasets was assessed with and without applying data augmentation, contrastive learning, and self-learning to individually investigate the impact of these techniques. The self-learning and deformable data augmentation techniques exhibited comparable performance with Dice indices of 0.913 ± 0.030 and 0.920 ± 0.022 for kidney segmentation, 0.890 ± 0.035 and 0.898 ± 0.027 for hippocampus segmentation, and 0.891 ± 0.045 and 0.897 ± 0.040 for lesion segmentation, respectively. These two approaches significantly outperformed the contrastive learning and the original model with Dice indices of 0.871 ± 0.039 and 0.868 ± 0.042 for kidney segmentation, 0.872 ± 0.045 and 0.865 ± 0.048 for hippocampus segmentation, and 0.870 ± 0.049 and 0.860 ± 0.058 for lesion segmentation, respectively. The combination of self-learning with deformable data augmentation led to a robust segmentation model with no outliers in the outcomes. This work demonstrated the beneficial impact of self-learning and deformable data augmentation on organ and lesion segmentation, where no additional training datasets are needed.
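The Dice indices reported above quantify the voxel-wise overlap between each predicted mask and its manual reference delineation. A minimal, self-contained illustration of that metric (not the authors' code; the array names and shapes are hypothetical):

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks (1 = structure, 0 = background)."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))

# Hypothetical 3D segmentation and its manual reference
pred = np.zeros((64, 64, 64), dtype=np.uint8); pred[20:40, 20:40, 20:40] = 1
ref = np.zeros_like(pred); ref[22:42, 20:40, 20:40] = 1
print(f"Dice = {dice_coefficient(pred, ref):.3f}")
```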
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
5
Dehghani F, Karimian A, Arabi H. Joint Brain Tumor Segmentation from Multi-magnetic Resonance Sequences through a Deep Convolutional Neural Network. J Med Signals Sens 2024;14:9. PMID: 38993203; PMCID: PMC11111160; DOI: 10.4103/jmss.jmss_13_23.
Abstract
Background Brain tumor segmentation contributes substantially to diagnosis and treatment planning. Manual brain tumor delineation is a time-consuming and tedious task and varies depending on the radiologist's skill. Automated brain tumor segmentation is therefore of high importance and is not subject to inter- or intra-observer variability. The objective of this study is to automate the delineation of brain tumors from the fluid-attenuated inversion recovery (FLAIR), T1-weighted (T1W), T2-weighted (T2W), and T1W contrast-enhanced (T1ce) magnetic resonance (MR) sequences through a deep learning approach, with a focus on determining which MR sequence alone, or which combination thereof, would lead to the highest accuracy. Methods The BraTS-2020 challenge dataset, containing 370 subjects with four MR sequences and manually delineated tumor masks, is used to train a residual neural network. This network is trained and assessed separately for each one of the MR sequences (single-channel input) and any combination thereof (dual- or multi-channel input). Results The quantitative assessment of the single-channel models reveals that the FLAIR sequence yields higher segmentation accuracy than its counterparts, with a 0.77 ± 0.10 Dice index. Among the dual-channel models, the model with FLAIR and T2W inputs exhibits the highest performance, with a 0.80 ± 0.10 Dice index. Joint tumor segmentation on all four MR sequences yields the highest overall segmentation accuracy, with a 0.82 ± 0.09 Dice index. Conclusion The FLAIR MR sequence is considered the best choice for tumor segmentation on a single MR sequence, while joint segmentation on all four MR sequences yields higher tumor delineation accuracy.
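The single-, dual-, and multi-channel settings described above amount to stacking co-registered MR sequences along the channel axis of the network input. A minimal sketch of that input preparation (not the authors' pipeline; the array names, shapes, and z-score normalization are assumptions):

```python
import numpy as np

# Hypothetical co-registered BraTS-style volumes, each of shape (depth, height, width)
flair = np.random.rand(155, 240, 240).astype(np.float32)
t2w = np.random.rand(155, 240, 240).astype(np.float32)
t1w = np.random.rand(155, 240, 240).astype(np.float32)
t1ce = np.random.rand(155, 240, 240).astype(np.float32)

def normalize(vol: np.ndarray) -> np.ndarray:
    """Z-score normalization per volume, a common preprocessing choice."""
    return (vol - vol.mean()) / (vol.std() + 1e-8)

# Single-channel input (FLAIR only): shape (1, D, H, W)
x_single = np.stack([normalize(flair)], axis=0)
# Dual-channel input (FLAIR + T2W): shape (2, D, H, W)
x_dual = np.stack([normalize(flair), normalize(t2w)], axis=0)
# Four-channel input (all sequences): shape (4, D, H, W)
x_multi = np.stack([normalize(v) for v in (flair, t2w, t1w, t1ce)], axis=0)
print(x_single.shape, x_dual.shape, x_multi.shape)
```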
Affiliation(s)
- Farzaneh Dehghani
- Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Alireza Karimian
- Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Hossein Arabi
- Department of Medical Imaging, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
6
Al-Haj Husain A, Zollinger M, Stadlinger B, Özcan M, Winklhofer S, Al-Haj Husain N, Schönegg D, Piccirelli M, Valdec S. Magnetic resonance imaging in dental implant surgery: a systematic review. Int J Implant Dent 2024;10:14. PMID: 38507139; PMCID: PMC10954599; DOI: 10.1186/s40729-024-00532-3.
Abstract
PURPOSE To comprehensively assess the existing literature regarding the rapidly evolving in vivo application of magnetic resonance imaging (MRI) for potential applications, benefits, and challenges in dental implant surgery. METHODS Electronic and manual searches were conducted in PubMed MEDLINE, EMBASE, Biosis, and Cochrane databases by two reviewers following the PICOS search strategy. This involved using medical subject headings (MeSH) terms, keywords, and their combinations. RESULTS Sixteen studies were included in this systematic review. Of the 16, nine studies focused on preoperative planning and follow-up phases, four evaluated image-guided implant surgery, while three examined artifact reduction techniques. The current literature highlights several MRI protocols whose in vivo feasibility and accuracy have recently been investigated and evaluated, focusing on their potential to provide surgically relevant quantitative and qualitative parameters for the assessment of osseointegration, peri-implant soft tissues, surrounding anatomical structures, reduction of artifacts caused by dental implants, and geometric accuracy relevant to implant placement. Black Bone and MSVAT-SPACE MRI, acquired within a short time, demonstrate improved hard and soft tissue resolution and offer high sensitivity in detecting pathological changes, making them a valuable alternative in targeted cases where CBCT is insufficient. Given the data heterogeneity, a meta-analysis was not possible. CONCLUSIONS The results of this systematic review highlight the potential of dental MRI, within its indications and limitations, to provide perioperative surgically relevant parameters for accurate placement of dental implants.
Affiliation(s)
- Adib Al-Haj Husain
- Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, Plattenstrasse 11, 8032, Zurich, Switzerland
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Marina Zollinger
- Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, Plattenstrasse 11, 8032, Zurich, Switzerland
- Bernd Stadlinger
- Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, Plattenstrasse 11, 8032, Zurich, Switzerland
- Mutlu Özcan
- Clinic of Chewing Function Disturbances and Dental Biomaterials, Center of Dental Medicine, University of Zurich, Zurich, Switzerland
- Nadin Al-Haj Husain
- Clinic of Chewing Function Disturbances and Dental Biomaterials, Center of Dental Medicine, University of Zurich, Zurich, Switzerland
- Department of Reconstructive Dentistry and Gerodontology, School of Dental Medicine, University of Bern, Bern, Switzerland
- Daphne Schönegg
- Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, University of Basel, Basel, Switzerland
- Marco Piccirelli
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Silvio Valdec
- Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, Plattenstrasse 11, 8032, Zurich, Switzerland.
7
Jahangir R, Kamali-Asl A, Arabi H, Zaidi H. Strategies for deep learning-based attenuation and scatter correction of brain 18F-FDG PET images in the image domain. Med Phys 2024;51:870-880. PMID: 38197492; DOI: 10.1002/mp.16914.
Abstract
BACKGROUND Attenuation and scatter correction is crucial for quantitative positron emission tomography (PET) imaging. Direct attenuation correction (AC) in the image domain using deep learning approaches has been recently proposed for combined PET/MR and standalone PET modalities lacking transmission scanning devices or anatomical imaging. PURPOSE In this study, different input settings were considered in the model training to investigate deep learning-based AC in the image space. METHODS Three different deep learning methods were developed for direct AC in the image space: (i) use of non-attenuation-corrected PET images as input (NonAC-PET), (ii) use of attenuation-corrected PET images with a simple two-class AC map (composed of soft tissue and background air) obtained from NonAC-PET images (PET segmentation-based AC [SegAC-PET]), and (iii) use of both NonAC-PET and SegAC-PET images in a Double-Channel fashion to predict the ground-truth CT-based attenuation-corrected PET images (CTAC-PET). Since a simple two-class AC map can easily be generated from NonAC-PET images, this work assessed the added value of incorporating SegAC-PET images into direct AC in the image space. A 4-fold cross-validation scheme was adopted to train and evaluate the different models using 80 brain 18F-fluorodeoxyglucose PET/CT images. The voxel-wise and region-wise accuracy of the models was examined by measuring the standardized uptake value (SUV) quantification bias in different regions of the brain. RESULTS The overall root mean square error (RMSE) for the Double-Channel setting was 0.157 ± 0.08 SUV in the whole brain region, while RMSEs of 0.214 ± 0.07 and 0.189 ± 0.14 SUV were observed for the NonAC-PET and SegAC-PET models, respectively. A mean SUV bias of 0.01 ± 0.26% was achieved by the Double-Channel model regarding the activity concentration in the cerebellum region, as opposed to 0.08 ± 0.28% and 0.05 ± 0.28% SUV biases for the networks that uniquely used NonAC-PET or SegAC-PET as input, respectively. SegAC-PET images, with an SUV bias of -1.15 ± 0.54%, served as a benchmark for clinically accepted errors. In general, the Double-Channel network, relying on both SegAC-PET and NonAC-PET images, outperformed the other AC models. CONCLUSION Since the generation of two-class AC maps from non-AC PET images is straightforward, the current study investigated the potential added value of incorporating SegAC-PET images into a deep learning-based direct AC approach. Altogether, compared with models that use only NonAC-PET or SegAC-PET images, the Double-Channel deep learning network exhibited superior attenuation correction accuracy.
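The reported RMSE and regional SUV bias compare each model output against the CT-based attenuation-corrected reference. A minimal sketch of such voxel-wise and region-wise measures (illustrative only; the array and mask names are assumptions):

```python
import numpy as np

def suv_rmse(pred_suv: np.ndarray, ref_suv: np.ndarray, mask: np.ndarray) -> float:
    """Root mean square error (in SUV units) inside a region mask, e.g. the whole brain."""
    diff = pred_suv[mask] - ref_suv[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

def regional_suv_bias(pred_suv: np.ndarray, ref_suv: np.ndarray, region_mask: np.ndarray) -> float:
    """Percent bias of the mean activity concentration in a region, e.g. the cerebellum."""
    return float(100.0 * (pred_suv[region_mask].mean() - ref_suv[region_mask].mean())
                 / ref_suv[region_mask].mean())

# Hypothetical predicted (deep learning AC) and reference (CT-based AC) SUV volumes
pred = np.random.rand(96, 96, 96).astype(np.float32) + 1.0
ref = pred + np.random.normal(0, 0.1, pred.shape).astype(np.float32)
brain_mask = np.ones_like(pred, dtype=bool)
print(f"RMSE = {suv_rmse(pred, ref, brain_mask):.3f} SUV, "
      f"bias = {regional_suv_bias(pred, ref, brain_mask):+.2f}%")
```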
Affiliation(s)
- Reza Jahangir
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Alireza Kamali-Asl
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
8
Shiri I, Salimi Y, Maghsudi M, Jenabi E, Harsini S, Razeghi B, Mostafaei S, Hajianfar G, Sanaat A, Jafari E, Samimi R, Khateri M, Sheikhzadeh P, Geramifar P, Dadgar H, Bitrafan Rajabi A, Assadi M, Bénard F, Vafaei Sadr A, Voloshynovskiy S, Mainta I, Uribe C, Rahmim A, Zaidi H. Differential privacy preserved federated transfer learning for multi-institutional 68Ga-PET image artefact detection and disentanglement. Eur J Nucl Med Mol Imaging 2023;51:40-53. PMID: 37682303; PMCID: PMC10684636; DOI: 10.1007/s00259-023-06418-7.
Abstract
PURPOSE Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients and financial costs. Mismatch and halo artefacts occur frequently in whole-body PET/CT imaging with gallium-68 (68Ga)-labelled compounds. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues for building centre-specific models that detect and correct artefacts present in PET images. METHODS Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 3 countries, including 8 different centres, were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images under centre-based (CeBa), centralized (CeZe) and the proposed differential privacy FTL frameworks. Quantitative analysis was performed in 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (CI 95%: 0.38 to 0.47), 0.32 ± 0.23 (CI 95%: 0.27 to 0.37) and 0.28 ± 0.15 (CI 95%: 0.25 to 0.31), respectively. Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) in the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts, compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions in 68Ga-PET imaging. CONCLUSION The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated into the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.
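For orientation only, the sketch below illustrates the general idea of one round of differentially private federated averaging, in which clipped, noise-perturbed model updates from several centres are aggregated by a server. It is not the authors' FTL implementation, and all names, shapes, and hyperparameters are assumptions:

```python
import numpy as np

def local_update(weights: np.ndarray, data: np.ndarray) -> np.ndarray:
    """Placeholder for one centre's local training step (returns updated weights)."""
    return weights - 0.01 * np.random.randn(*weights.shape)  # stand-in for a real gradient step

def dp_federated_round(global_w: np.ndarray, centre_data: list, clip: float = 1.0,
                       noise_std: float = 0.1) -> np.ndarray:
    """One round of federated averaging with clipped, noised updates (Gaussian mechanism)."""
    updates = []
    for data in centre_data:
        local_w = local_update(global_w.copy(), data)
        delta = local_w - global_w
        # Clip the update norm, then add Gaussian noise before sharing with the server
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, clip / (norm + 1e-12))
        updates.append(delta + np.random.normal(0.0, noise_std * clip, delta.shape))
    return global_w + np.mean(updates, axis=0)

# Hypothetical setup: 8 centres, a flattened weight vector, a few communication rounds
global_w = np.zeros(1000)
centres = [np.random.rand(50, 10) for _ in range(8)]
for _ in range(5):
    global_w = dp_federated_round(global_w, centres)
```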
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Department of Cardiology, Inselspital, University of Bern, Bern, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Sara Harsini
- BC Cancer Research Institute, Vancouver, BC, Canada
- Behrooz Razeghi
- Department of Computer Science, University of Geneva, Geneva, Switzerland
- Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Rezvan Samimi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Maziar Khateri
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Peyman Sheikhzadeh
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Parham Geramifar
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habibollah Dadgar
- Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Ahmad Bitrafan Rajabi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Echocardiography Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- François Bénard
- BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Alireza Vafaei Sadr
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA, 17033, USA
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Carlos Uribe
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Arman Rahmim
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Geneva University Neuro Center, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
9
Shiri I, Salimi Y, Hervier E, Pezzoni A, Sanaat A, Mostafaei S, Rahmim A, Mainta I, Zaidi H. Artificial Intelligence-Driven Single-Shot PET Image Artifact Detection and Disentanglement: Toward Routine Clinical Image Quality Assurance. Clin Nucl Med 2023;48:1035-1046. PMID: 37883015; PMCID: PMC10662584; DOI: 10.1097/rlu.0000000000004912.
Abstract
PURPOSE Medical imaging artifacts compromise image quality and quantitative analysis and might confound interpretation and misguide clinical decision-making. The present work envisions and demonstrates a new paradigm, the PET image Quality Assurance NETwork (PET-QA-NET), in which various image artifacts are detected and disentangled from images without prior knowledge of a standard of reference or ground truth for routine PET image quality assurance. METHODS The network was trained and evaluated using training/validation/testing data sets consisting of 669/100/100 artifact-free oncological 18F-FDG PET/CT images and subsequently fine-tuned and evaluated on 384 (20% for fine-tuning) scans from 8 different PET centers. The developed DL model was quantitatively assessed using various image quality metrics calculated for 22 volumes of interest defined on each scan. In addition, 200 additional 18F-FDG PET/CT scans (this time with artifacts), generated using both CT-based attenuation and scatter correction (routine PET) and PET-QA-NET, were blindly evaluated by 2 nuclear medicine physicians for the presence of artifacts, diagnostic confidence, image quality, and the number of lesions detected in different body regions. RESULTS Across the volumes of interest of 100 patients, SUV MAE values of 0.13 ± 0.04, 0.24 ± 0.1, and 0.21 ± 0.06 were reached for SUVmean, SUVmax, and SUVpeak, respectively (no statistically significant difference). Qualitative assessment showed a general trend of improved image quality and diagnostic confidence and reduced image artifacts for PET-QA-NET compared with routine CT-based attenuation and scatter correction. CONCLUSION We developed a highly effective and reliable quality assurance tool that can be embedded routinely to detect and correct for 18F-FDG PET image artifacts in clinical settings with notably improved PET image quality and quantitative capabilities.
Affiliation(s)
- Isaac Shiri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Yazdan Salimi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Elsa Hervier
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Agathe Pezzoni
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society
- Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Ismini Mainta
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Habib Zaidi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Geneva University Neuro Center, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
10
Arabi H, Zaidi H. Recent Advances in Positron Emission Tomography/Magnetic Resonance Imaging Technology. Magn Reson Imaging Clin N Am 2023;31:503-515. PMID: 37741638; DOI: 10.1016/j.mric.2023.06.002.
Abstract
More than a decade has passed since the deployment of the first commercial whole-body hybrid PET/MR scanner in the clinic. The major advantages and limitations of this technology have been investigated from technical and medical perspectives. Despite the remarkable advantages associated with hybrid PET/MR imaging, such as reduced radiation dose and fully simultaneous functional and structural imaging, this technology has faced major challenges in terms of mutual interference between the MRI and PET components, in addition to the complexity of achieving quantitative imaging owing to the intricate MRI-guided attenuation correction in PET/MRI. In this review, the latest technical developments in PET/MRI technology as well as the state-of-the-art solutions to the major challenges of quantitative PET/MR imaging are discussed.
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211, Switzerland; Geneva University Neurocenter, Geneva University, Geneva CH-1205, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen 9700 RB, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense 500, Denmark.
11
Attenuation correction for PET/MRI to measure tracer activity surrounding total knee arthroplasty. Eur J Hybrid Imaging 2022;6:31. PMCID: PMC9637681; DOI: 10.1186/s41824-022-00152-3.
Abstract
Background Positron emission tomography (PET) in combination with magnetic resonance imaging (MRI) could allow inflammatory complications near total knee arthroplasty (TKA) to be studied early in their development. However, attenuation of the PET signal by the metal TKA implants imparts substantial error into measurements of tracer activity, and conventional MR-based attenuation correction (AC) methods have large signal voids in the vicinity of metal implants.
Purpose To evaluate a segmentation-based AC approach to measure tracer uptake from PET/MRI scans near TKA implants. Methods A TKA implant (Triathlon, Stryker, Mahwah, USA) was implanted into a cadaver. Four vials were filled with [18F]fluorodeoxyglucose with known activity concentration (4.68 MBq total, 0.76 MBq/mL) and inserted into the knee. Images of the knee were acquired using a 3T PET/MRI system (Biograph mMR, Siemens Healthcare, Erlangen, Germany). Models of the implant components were registered to the MR data using rigid-body transformations and the other tissue classes were manually segmented. These segments were used to create the segmentation-based map and complete the AC. Percentage error of the resulting measured activities was calculated by comparing the measured and known amounts of activity in each vial. Results The original AC resulted in a percentage error of 64.1% from the known total activity. Errors in the individual vial activities ranged from 40.2 to 82.7%. Using the new segmentation-based AC, the percentage error of the total activity decreased to 3.55%. Errors in the individual vials were less than 15%. Conclusions The segmentation-based AC technique dramatically reduced the error in activity measurements that result from PET signal attenuation by the metal TKA implant. This approach may be useful to enhance the reliability of PET/MRI measurements for numerous applications.
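The percentage errors above follow directly from comparing measured with known activity in each vial. A minimal sketch of that calculation (the numbers are placeholders, not the study's measurements; the equal split of the 4.68 MBq total across vials is an assumption):

```python
# Known and measured activities (MBq) for four hypothetical vials
known = [1.17, 1.17, 1.17, 1.17]       # assumed equal split of a 4.68 MBq total
measured = [0.45, 0.70, 0.25, 0.30]    # e.g. values recovered after attenuation correction

def percent_error(measured_mbq: float, known_mbq: float) -> float:
    """Absolute percentage error of a measured activity relative to the known activity."""
    return 100.0 * abs(measured_mbq - known_mbq) / known_mbq

per_vial = [percent_error(m, k) for m, k in zip(measured, known)]
total = percent_error(sum(measured), sum(known))
print([f"{e:.1f}%" for e in per_vial], f"total error = {total:.1f}%")
```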
12
Sanaat A, Shiri I, Ferdowsi S, Arabi H, Zaidi H. Robust-Deep: A Method for Increasing Brain Imaging Datasets to Improve Deep Learning Models' Performance and Robustness. J Digit Imaging 2022;35:469-481. PMID: 35137305; PMCID: PMC9156620; DOI: 10.1007/s10278-021-00536-0.
Abstract
A small dataset commonly affects the generalization, robustness, and overall performance of deep neural networks (DNNs) in medical imaging research. Since gathering large clinical databases is always difficult, we proposed an analytical method for producing a large, realistic and diverse dataset. Clinical brain PET/CT/MR images of 35 patients were included: full-dose (FD) PET, low-dose (LD) PET corresponding to only 5% of the events acquired in the FD scan, non-attenuation-corrected (NAC) and CT-based measured attenuation-corrected (MAC) PET images, CT images, and T1 and T2 MR sequences. All images were registered to the Montreal Neurological Institute (MNI) template. Laplacian blending was used to naturally combine information, in the frequency domain, from the images of two separate patients together with a blending mask. This classical technique from the computer vision and image processing communities is still widely used and, unlike modern DNNs, does not require the availability of training data. A modified ResNet DNN was implemented to evaluate four image-to-image translation tasks, including LD to FD, LD+MR to FD, NAC to MAC, and MRI to CT, with and without using the synthesized images. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and joint histogram analysis, was performed. The quantitative comparison between the registered small dataset containing 35 patients and the large dataset containing 350 synthesized plus 35 real images demonstrated improvement of the RMSE and SSIM by 29% and 8% for the LD to FD, 40% and 7% for the LD+MRI to FD, 16% and 8% for the NAC to MAC, and 24% and 11% for the MRI to CT mapping task, respectively. The qualitative/quantitative analysis demonstrated that the proposed model improved the performance of all four DNN models by producing images of higher quality and lower quantitative bias and variance compared to reference images.
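Laplacian blending itself is a classical pyramid technique. The 2D sketch below illustrates the general recipe of blending two images through Gaussian/Laplacian pyramids with a soft mask; it is not the authors' implementation, and the array names, sizes, and number of pyramid levels are assumptions:

```python
import numpy as np
import cv2  # classical image-pyramid tools; no training data required

def laplacian_blend(img_a, img_b, mask, levels=4):
    """Blend two images with a soft mask via Laplacian pyramids (classical approach)."""
    # Gaussian pyramids of both images and of the blending mask
    ga, gb, gm = [img_a.astype(np.float32)], [img_b.astype(np.float32)], [mask.astype(np.float32)]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1])); gb.append(cv2.pyrDown(gb[-1])); gm.append(cv2.pyrDown(gm[-1]))
    # Laplacian pyramids (difference between levels), blended level by level with the mask
    blended = []
    for i in range(levels):
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=ga[i].shape[::-1])
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=gb[i].shape[::-1])
        blended.append(gm[i] * la + (1.0 - gm[i]) * lb)
    # Collapse the pyramid, starting from the coarsest blended Gaussian level
    out = gm[levels] * ga[levels] + (1.0 - gm[levels]) * gb[levels]
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=blended[i].shape[::-1]) + blended[i]
    return out

# Hypothetical 2D slices from two different patients and a half-and-half blending mask
a = np.random.rand(256, 256); b = np.random.rand(256, 256)
m = np.zeros((256, 256)); m[:, :128] = 1.0
synthetic_slice = laplacian_blend(a, b, m)
```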
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Sohrab Ferdowsi
- University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-500 Odense, Denmark
13
Performance of PROPELLER FSE T2WI in reducing metal artifacts of material porcelain fused to metal crown: a clinical preliminary study. Sci Rep 2022;12:8442. PMID: 35589945; PMCID: PMC9120134; DOI: 10.1038/s41598-022-12402-2.
Abstract
This study aimed to compare MRI quality between conventional fast spin echo T2-weighted imaging (FSE T2WI) and periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) FSE T2WI for patients with various porcelain-fused-to-metal (PFM) crowns, and to analyze the value of the PROPELLER technique in reducing metal artifacts. Conventional FSE T2WI and PROPELLER FSE T2WI sequences for axial imaging of the head were applied in participants with different PFM crowns: cobalt-chromium (Co–Cr) alloy, pure titanium (Ti), and gold–palladium (Au–Pd) alloy. Two radiologists qualitatively evaluated the overall image quality of the section containing the PFM crown using a 5-point scale and quantitatively measured the maximum artifact area and artifact signal-to-noise ratio (SNR). Fifty-nine participants were evaluated. The metal crowns with the least artifacts and the best image quality on both conventional FSE T2WI and PROPELLER FSE T2WI were, in order, Au–Pd alloy, Ti, and Co–Cr alloy. PROPELLER FSE T2WI was superior to conventional FSE T2WI in improving image quality and reducing artifact area for Co–Cr alloy (17.0 ± 0.2% smaller artifact area, p < 0.001) and Ti (11.6 ± 0.7% smaller artifact area, p = 0.005), but had similar performance compared to FSE T2WI for Au–Pd alloy. The SNRs of the tongue and masseter muscle were significantly higher on PROPELLER FSE T2WI compared with conventional FSE T2WI (tongue: 29.76 ± 8.45 vs. 21.54 ± 9.31, p = 0.007; masseter muscle: 19.11 ± 8.24 vs. 15.26 ± 6.08, p = 0.016). Therefore, different PFM crowns generate varying degrees of metal artifacts in MRI, and PROPELLER can effectively reduce metal artifacts, especially for Co–Cr alloy PFM crowns.
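SNR definitions vary between studies; a common ROI-based form divides the mean tissue signal by the standard deviation of background noise. A minimal sketch under that assumption (not necessarily the exact measurement used in this study; the ROI positions and image are hypothetical):

```python
import numpy as np

def roi_snr(image: np.ndarray, tissue_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """ROI-based SNR: mean tissue signal over the standard deviation of background noise."""
    return float(image[tissue_roi].mean() / (image[background_roi].std() + 1e-8))

# Hypothetical axial T2WI slice with a tongue ROI and an air (background) ROI
slice_t2 = np.random.rand(320, 320).astype(np.float32) * 100
tongue_roi = np.zeros_like(slice_t2, dtype=bool); tongue_roi[150:180, 140:180] = True
air_roi = np.zeros_like(slice_t2, dtype=bool); air_roi[:40, :40] = True
print(f"SNR = {roi_snr(slice_t2, tongue_roi, air_roi):.1f}")
```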
14
Bahrami A, Karimian A, Arabi H. Comparison of different deep learning architectures for synthetic CT generation from MR images. Phys Med 2021;90:99-107. PMID: 34597891; DOI: 10.1016/j.ejmp.2021.09.006.
Abstract
PURPOSE Among the different available methods for synthetic CT generation from MR images for the task of MR-guided radiation planning, deep learning algorithms have been shown to outperform their conventional counterparts. In this study, we investigated the performance of some of the most popular deep learning architectures, including eCNN, U-Net, GAN, V-Net, and ResNet, for the task of sCT generation. As a baseline, an atlas-based method was implemented, to which the results of the deep learning-based models are compared. METHODS A dataset consisting of 20 co-registered MR-CT pairs of the male pelvis was used to assess the performance of the different sCT production methods. The mean error (ME), mean absolute error (MAE), Pearson correlation coefficient (PCC), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics were computed between the estimated sCT and the ground truth (reference) CT images. RESULTS Visual inspection revealed that the sCTs produced by eCNN, V-Net, and ResNet, unlike the other methods, were less noisy and closely resembled the ground truth CT image. In the whole pelvis region, eCNN yielded the lowest MAE (26.03 ± 8.85 HU) and ME (0.82 ± 7.06 HU), and the highest PCC metrics were yielded by eCNN (0.93 ± 0.05) and ResNet (0.91 ± 0.02). The ResNet model had the highest PSNR of 29.38 ± 1.75 among all models. In terms of the Dice similarity coefficient, the eCNN method revealed superior performance in major tissue identification (air, bone, and soft tissue). CONCLUSIONS All in all, the eCNN and ResNet deep learning methods revealed acceptable performance with clinically tolerable quantification errors.
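The image-quality metrics listed above (ME, MAE, PCC, SSIM, PSNR) can be computed between a synthetic CT and its reference CT with standard tooling. A minimal sketch (illustrative only; the array names, body mask, and data-range choice are assumptions):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def sct_metrics(sct: np.ndarray, ref_ct: np.ndarray, body_mask: np.ndarray) -> dict:
    """ME, MAE (HU), PCC, SSIM, and PSNR between a synthetic CT and the reference CT."""
    diff = sct[body_mask] - ref_ct[body_mask]
    data_range = float(ref_ct.max() - ref_ct.min())
    return {
        "ME_HU": float(diff.mean()),
        "MAE_HU": float(np.abs(diff).mean()),
        "PCC": float(np.corrcoef(sct[body_mask], ref_ct[body_mask])[0, 1]),
        "SSIM": float(structural_similarity(sct, ref_ct, data_range=data_range)),
        "PSNR_dB": float(peak_signal_noise_ratio(ref_ct, sct, data_range=data_range)),
    }

# Hypothetical co-registered synthetic and reference CT volumes (HU), with a body mask
ref = np.random.normal(0, 300, (64, 128, 128)).astype(np.float32)
sct = ref + np.random.normal(0, 30, ref.shape).astype(np.float32)
mask = np.ones_like(ref, dtype=bool)
print(sct_metrics(sct, ref, mask))
```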
Affiliation(s)
- Abbas Bahrami
- Faculty of Physics, University of Isfahan, Isfahan, Iran
- Alireza Karimian
- Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran.
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
15
Arabi H, Zaidi H. MRI-guided attenuation correction in torso PET/MRI: Assessment of segmentation-, atlas-, and deep learning-based approaches in the presence of outliers. Magn Reson Med 2021;87:686-701. PMID: 34480771; PMCID: PMC9292636; DOI: 10.1002/mrm.29003.
Abstract
Purpose We compare the performance of three commonly used MRI‐guided attenuation correction approaches in torso PET/MRI, namely segmentation‐, atlas‐, and deep learning‐based algorithms. Methods Twenty‐five co‐registered torso 18F‐FDG PET/CT and PET/MR images were enrolled. PET attenuation maps were generated from in‐phase Dixon MRI using a three‐tissue‐class segmentation‐based approach (soft tissue, lung, and background air), a voxel‐wise weighting atlas‐based approach, and a residual convolutional neural network. The bias in standardized uptake value (SUV) was calculated for each approach, considering CT‐based attenuation‐corrected PET images as reference. In addition to the overall performance assessment of these approaches, the primary focus of this work was on recognizing the origins of potential outliers, notably body truncation, metal artifacts, abnormal anatomy, and small malignant lesions in the lungs. Results The deep learning approach outperformed both the atlas‐ and segmentation‐based methods, resulting in less than 4% SUV bias across 25 patients, compared to the segmentation‐based method with up to 20% SUV bias in bony structures and the atlas‐based method with 9% bias in the lung. The deep learning‐based method exhibited superior overall performance; yet, in the case of severe truncation and metallic artifacts in the input MRI, it was outperformed by the atlas‐based method, exhibiting suboptimal performance in the affected regions. Conversely, for abnormal anatomies, such as a patient presenting with one lung or a small malignant lesion in the lung, the deep learning algorithm exhibited promising performance compared to the other methods. Conclusion The deep learning‐based method provides promising outcomes for synthetic CT generation from MRI. However, metal artifacts and body truncation should be specifically addressed.
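The three-tissue-class approach above assigns a single linear attenuation coefficient to each class segmented from the in-phase Dixon MRI. A minimal sketch of that assignment (the label convention is an assumption, and the 511 keV coefficients are typical literature values rather than the study's exact settings):

```python
import numpy as np

# Assumed label convention: 0 = background air, 1 = lung, 2 = soft tissue
MU_511KEV = {0: 0.0, 1: 0.0224, 2: 0.0975}  # linear attenuation coefficients in cm^-1 (typical values)

def segmentation_based_mu_map(label_volume: np.ndarray) -> np.ndarray:
    """Build a PET attenuation (mu) map by assigning one coefficient per tissue class."""
    mu_map = np.zeros(label_volume.shape, dtype=np.float32)
    for label, mu in MU_511KEV.items():
        mu_map[label_volume == label] = mu
    return mu_map

# Hypothetical label volume derived from an in-phase Dixon MRI segmentation
labels = np.random.randint(0, 3, size=(64, 96, 96))
mu_map = segmentation_based_mu_map(labels)
print(mu_map.shape, np.unique(mu_map))
```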
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
16
Mostafapour S, Gholamiankhah F, Dadgar H, Arabi H, Zaidi H. Feasibility of Deep Learning-Guided Attenuation and Scatter Correction of Whole-Body 68Ga-PSMA PET Studies in the Image Domain. Clin Nucl Med 2021;46:609-615. PMID: 33661195; DOI: 10.1097/rlu.0000000000003585.
Abstract
OBJECTIVE This study evaluates the feasibility of direct scatter and attenuation correction of whole-body 68Ga-PSMA PET images in the image domain using deep learning. METHODS Whole-body 68Ga-PSMA PET images of 399 subjects were used to train a residual deep learning model, taking PET non-attenuation-corrected images (PET-nonAC) as input and CT-based attenuation-corrected PET images (PET-CTAC) as target (reference). Forty-six whole-body 68Ga-PSMA PET images were used as an independent validation dataset. For validation, synthetic deep learning-based attenuation-corrected PET images were assessed considering the corresponding PET-CTAC images as reference. The evaluation metrics included the mean absolute error (MAE) of the SUV, peak signal-to-noise ratio, and structural similarity index (SSIM) in the whole body, as well as in different regions of the body, namely, head and neck, chest, and abdomen and pelvis. RESULTS The deep learning-guided direct attenuation and scatter correction produced images of comparable visual quality to PET-CTAC images. It achieved an MAE, relative error (RE%), SSIM, and peak signal-to-noise ratio of 0.91 ± 0.29 (SUV), -2.46% ± 10.10%, 0.973 ± 0.034, and 48.171 ± 2.964, respectively, within whole-body images of the independent external validation dataset. The largest RE% was observed in the head and neck region (-5.62% ± 11.73%), although this region exhibited the highest value of SSIM metric (0.982 ± 0.024). The MAE (SUV) and RE% within the different regions of the body were less than 2.0% and 6%, respectively, indicating acceptable performance of the deep learning model. CONCLUSIONS This work demonstrated the feasibility of direct attenuation and scatter correction of whole-body 68Ga-PSMA PET images in the image domain using deep learning with clinically tolerable errors. The technique has the potential of performing attenuation correction on stand-alone PET or PET/MRI systems.
Affiliation(s)
- Samaneh Mostafapour
- From the Department of Radiology Technology, Faculty of Paramedical Sciences, Mashhad University of Medical Sciences, Mashhad
- Faeze Gholamiankhah
- Department of Medical Physics, Faculty of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd
- Habibollah Dadgar
- Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4
17
Zaidi H, El Naqa I. Quantitative Molecular Positron Emission Tomography Imaging Using Advanced Deep Learning Techniques. Annu Rev Biomed Eng 2021;23:249-276. PMID: 33797938; DOI: 10.1146/annurev-bioeng-082420-020343.
Abstract
The widespread availability of high-performance computing and the popularity of artificial intelligence (AI) with machine learning and deep learning (ML/DL) algorithms at the helm have stimulated the development of many applications involving the use of AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. This review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.
Affiliation(s)
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland
- Geneva Neuroscience Centre, University of Geneva, 1205 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, 9700 RB Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-5000 Odense, Denmark
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida 33612, USA
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109, USA
- Department of Oncology, McGill University, Montreal, Quebec H3A 1G5, Canada
18
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021;83:122-137. DOI: 10.1016/j.ejmp.2021.03.008.
19
Deep learning-based metal artefact reduction in PET/CT imaging. Eur Radiol 2021;31:6384-6396. PMID: 33569626; PMCID: PMC8270868; DOI: 10.1007/s00330-021-07709-z.
Abstract
Objectives The susceptibility of CT imaging to metallic objects gives rise to strong streak artefacts and skewed information about the attenuation medium around the metallic implants. This metal-induced artefact in CT images leads to inaccurate attenuation correction in PET/CT imaging. This study investigates the potential of deep learning–based metal artefact reduction (MAR) in quantitative PET/CT imaging. Methods Deep learning–based metal artefact reduction approaches were implemented in the image (DLI-MAR) and projection (DLP-MAR) domains. The proposed algorithms were quantitatively compared to the normalized MAR (NMAR) method using simulated and clinical studies. Eighty metal-free CT images were employed for simulation of metal artefact as well as training and evaluation of the aforementioned MAR approaches. Thirty 18F-FDG PET/CT images affected by the presence of metallic implants were retrospectively employed for clinical assessment of the MAR techniques. Results The evaluation of MAR techniques on the simulation dataset demonstrated the superior performance of the DLI-MAR approach (structural similarity (SSIM) = 0.95 ± 0.2 compared to 0.94 ± 0.2 and 0.93 ± 0.3 obtained using DLP-MAR and NMAR, respectively) in minimizing metal artefacts in CT images. The presence of metallic artefacts in CT images or PET attenuation correction maps led to quantitative bias, image artefacts and under- and overestimation of scatter correction of PET images. The DLI-MAR technique led to a quantitative PET bias of 1.3 ± 3% compared to 10.5 ± 6% without MAR and 3.2 ± 0.5% achieved by NMAR. Conclusion The DLI-MAR technique was able to reduce the adverse effects of metal artefacts on PET images through the generation of accurate attenuation maps from corrupted CT images. Key Points • The presence of metallic objects, such as dental implants, gives rise to severe photon starvation, beam hardening and scattering, thus leading to adverse artefacts in reconstructed CT images. • The aim of this work is to develop and evaluate a deep learning–based MAR to improve CT-based attenuation and scatter correction in PET/CT imaging. • Deep learning–based MAR in the image (DLI-MAR) domain outperformed its counterpart implemented in the projection (DLP-MAR) domain. The DLI-MAR approach minimized the adverse impact of metal artefacts on whole-body PET images through generating accurate attenuation maps from corrupted CT images. Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-07709-z.
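For context, a much simpler classical baseline than the NMAR and deep learning methods evaluated above is linear interpolation of the metal trace in the sinogram. The sketch below illustrates that baseline only; it is not one of the study's methods, and the phantom, mask, and reconstruction settings are assumptions:

```python
import numpy as np
from skimage.transform import radon, iradon

def li_mar(ct_slice: np.ndarray, metal_mask: np.ndarray, theta=None) -> np.ndarray:
    """Baseline linear-interpolation MAR: inpaint the metal trace in the sinogram, then reconstruct."""
    if theta is None:
        theta = np.linspace(0.0, 180.0, max(ct_slice.shape), endpoint=False)
    sino = radon(ct_slice, theta=theta)
    trace = radon(metal_mask.astype(float), theta=theta) > 1e-3  # detector bins shadowed by metal
    corrected = sino.copy()
    bins = np.arange(sino.shape[0])
    for j in range(sino.shape[1]):                 # interpolate across the trace, angle by angle
        bad = trace[:, j]
        if bad.any() and (~bad).any():
            corrected[bad, j] = np.interp(bins[bad], bins[~bad], sino[~bad, j])
    recon = iradon(corrected, theta=theta)         # filtered back-projection of the inpainted sinogram
    recon[metal_mask] = ct_slice[metal_mask]       # re-insert the known metal voxels
    return recon

# Hypothetical slice: a soft-tissue disc with a small bright metal insert
img = np.zeros((256, 256), np.float32)
yy, xx = np.ogrid[:256, :256]
img[(yy - 128) ** 2 + (xx - 128) ** 2 < 100 ** 2] = 1.0
metal = (yy - 128) ** 2 + (xx - 150) ** 2 < 6 ** 2
img[metal] = 10.0
corrected_slice = li_mar(img, metal)
```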